
GitHub Is Showing the Trump Administration Scrubbing Government Web Pages in Real Time

23 January 2025 at 12:17

You can see the specific steps that a government agency is taking to comply with the Trump administration’s policies against diversity, equity, and inclusion on the agency’s GitHub, which shows it frantically deleting and editing various documents, employee handbooks, Slack bots, and job listings across everything the agency touches. 

18F is a much-hyped government agency within the General Services Administration that was founded under the Obama Administration after the disastrous rollout of Healthcare.gov. It more or less had the specific goal of attracting Silicon Valley talent to the federal government to help the government innovate and make many of its websites and digital services suck less. It is one of the “cooler” federal agencies, and has open sourced many of its projects on GitHub.


GitHub is a website for open source development that shows changes to code and documentation as individual “commits.” In the first days of the Trump administration, 18F’s commit list is full of change logs detailing the administration’s attempts to destroy the concept of diversity, equity, and inclusion. 
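GitHub also exposes this commit history through a public REST API, so the same deletions can be watched programmatically. Below is a minimal sketch in Python of polling that endpoint; it is an illustration, not 404 Media’s method, and the repository name is an assumed example of a public 18F repo.

```python
# Minimal sketch: list the most recent commits to a public GitHub repository.
# The repository name below is an assumed example of a public 18F repo.
import requests

REPO = "18F/handbook"  # hypothetical/illustrative target
URL = f"https://api.github.com/repos/{REPO}/commits"

resp = requests.get(URL, params={"per_page": 10}, timeout=10)
resp.raise_for_status()

for commit in resp.json():
    sha = commit["sha"][:7]
    date = commit["commit"]["author"]["date"]
    message = commit["commit"]["message"].splitlines()[0]
    print(f"{date}  {sha}  {message}")
```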


The changes show that in the last 48 hours, 18F has edited text and wholesale deleted both internal and external web pages about, for example, “Inclusive behaviors,” “healthy conflict and constructive feedback,” “DEIA resources,” and “Diversity, equity, inclusion, and accessibility.” It deleted a webpage about “psychological safety” (which now 404s), deleted all information about the “DE&I leads” at the agency, and removed language for employees that said “Anyone who has issues or concerns related to inclusion or equity in the 18F engineering chapter should feel empowered to reach out to the DE&I Leads.” It has deleted, in various places, the word “inclusion,” as well as the term “affinity groups.” 


It also deleted an internal Slack Bot called “Inclusion Bot,” which is described as being “integrated into Slack and passively listens for words or phrases that have racist, sexist, ableist, or otherwise exclusionary or discriminatory histories or backgrounds. When it hears those words, it privately lets the writer know and offers some suggested alternatives.” 

💡
Do you work for the federal government? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702. Otherwise, send me an email at [email protected].

It has also notably deleted information intended for improving accessibility for blind and visually impaired employees, which asked employees to use “visual descriptions” when introducing themselves on Zoom meetings.

In a hiring document, the language “Teams should consider factors of equity and complexity of the research when determining compensation for participants on their project” has been changed to “team should consider other factors or complexity of the research.”


The Trump administration has not tried to hide that it is trying to delete web pages and employee information across the government. But seeing the change logs pop up as they’re happening on GitHub shows exactly how these changes are being done and how they’re rolling out.

Developer Creates Infinite Maze That Traps AI Training Bots

23 January 2025 at 06:55

A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

“It's less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn't appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media. 

“Of course, these crawlers are massively scaled, and are downloading links from large swathes of the internet at any given time,” they added. “But they are still consuming resources, spinning around doing nothing helpful, unless they find a way to detect that they are stuck in this loop.”
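The mechanism Aaron B describes can be sketched in a few lines. The toy server below is only an illustration of the general idea, not Nepenthes itself (the real tool is more elaborate): every request, whatever the path, gets back a page of freshly generated links that all point deeper into the same maze.

```python
# Toy "tar pit": every page contains only random links back into the maze,
# so a naive crawler that follows links never runs out of URLs to fetch.
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

def random_slug(length=12):
    return "".join(random.choices(string.ascii_lowercase, k=length))

class TarPitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        links = "".join(
            f'<li><a href="/{random_slug()}">{random_slug()}</a></li>'
            for _ in range(10)
        )
        body = f"<html><body><ul>{links}</ul></body></html>".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TarPitHandler).serve_forever()
```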


Viral 'Challah Horse' Image Zuckerberg Loved Was Originally Created as a Warning About Facebook's AI Slop

22 January 2025 at 16:08

The viral AI-generated bread horse image that Mark Zuckerberg “loved” on Tuesday was originally created as a meme by a Polish news organization to warn about the dangers of AI-generated slop on social media. The image became a viral sensation on the Polish internet but broke containment and began going viral more widely; it was then stolen by a totally unrelated real AI spam farm where it has gone megaviral and was ‘loved’ by the Meta CEO.

Called “chałkoń” or “challah horse,” the image was part of a series of AI-generated images created by a Polish news outlet called Donald.pl, which pilloried the AI spam that has taken over Facebook. “This woman baked a challah horse but no one congratulated her,” a page run by Donald called Polska w dużych dawkach (Poland in Large Doses) wrote on January 7.

The image was designed as a commentary on AI spam on Facebook, the outlet wrote. But like other AI spam, some people believed it was real, and the image was seen by more than a million people and liked 11,000 times. The English-language, subscriber-funded Notes from Poland originally wrote about this drama, if you’d like to learn more. 

Zuckerberg 'Loves' AI Slop Image From Spam Account That Posts Amputated Children

22 January 2025 at 11:37

Meta CEO Mark Zuckerberg “loved” an AI-generated slop image of a horse made out of bread posted by a spam page on Facebook that also posts AI-generated images of children with amputations and regularly circumvents Facebook’s algorithm to link users offsite to ad-laden AI-generated content farms.

The page, “Faithful,” is verified, operated out of Romania, has 1.1 million followers, and regularly goes mega viral with the exact type of AI slop that I have been writing about over the last year. In that sense, it is the perfect encapsulation of the type of spam page that has become dominant on the platform as Meta continues to lean into AI-generated content and pays people for going viral on the site. 

“I made every detail with love, but it seems no one cares,” reads the caption of the image, which has 2.7 million reactions, 193,000 comments, and 98,000 shares as of the time of this writing. Zuckerberg’s interaction with the page was first noticed by Gazpacho Machine, a man who posts reviews of food he eats while taking showers. 


When Gazpacho Machine posted about this, I was initially skeptical that Zuckerberg's real account had liked the post (as in, it could have been an imposter), and the image had so many likes that Facebook was having trouble loading information about which accounts had actually liked it. Gazpacho Machine sent me a screen recording showing that it was indeed Zuckerberg's real "@zuck" account, and I was later able to verify for myself that this is Zuckerberg's real account:

(Video: 0:10)

This bread horse is a variation on a classic type of AI slop that made its way into my very first story about the phenomenon of AI spam on the platform in December 2023, when AI spammers were taking already viral images and running them through image-to-image AI tools to create slight variations of the original viral image. 

The origin story of bread horse is “The Bread House man,” a viral Russian image from the 2010s of a man next to a house he had created from various rolls and baguettes. Catherine Hall, a Facebook user who tracked the early spread of AI spam on Facebook, originally found dozens of AI-generated variations of the Bread House Man. 


The “Faithful” page, whose header image says “I Love God. I am proud to say that,” has posted hundreds of AI-generated images over the last few years across a host of genres that are now very familiar to me, and that are similar to what many pages operated by people in the global south post while trying to make money on Facebook. It has posted the same image of an AI-generated child who is missing an arm, whose caption says “My mother said I was beautiful, but so far I have not liked anyone,” at least three times. It has gotten thousands of likes each time.


The page has also posted various images of AI-generated elderly people who are supposedly older than 100 celebrating their birthday, AI sand sculpture images, AI-generated variations of American Idol and America’s Got Talent, AI wood sculptures, AI photos of aging couples, AI ice sculptures, AI knitting, AI ‘I drew a picture’ images, AI families who are also onion farmers, and AI recycled bottle sculptures. Faithful has also posted a fair bit of Donald Trump content, as well as lots of inspirational screenshots of the Bible, reels that are seemingly automatically created from Reddit posts or written by AI, and inspiration porn. 


Many of the images are monetized in ways that I have previously reported on. For example, many of them have captions that ask users to read the first comment for more information; a pinned top comment posted by Faithful then links off of Facebook to a website that is absolutely loaded with ads. I clicked on a recent link posted by both Faithful and a related page called "Faith Space" about an AI-generated stepfather who stood up for his AI-generated stepdaughter when she was being bullied, and was taken to a website called Daily Home Gardening, which served me many ads for products called “Levitox” and “GlucoReNu” that showed images of worms and had captions that read “The Lump Of Worms Will Come Out Of You In The Morning. Try It.”


I do not know why Zuckerberg “loved” the AI generated bread horse, but it should be noted that it is harder to errantly “love” something on Facebook than it is to errantly like it. Meta did not respond to a request for comment. It is just one small action by one very rich and powerful person. But it is further evidence that strengthens what we already know: Mark Zuckerberg is not bothered by the AI spam that has turned his flagship invention into a cesspool of human sadness and unreality. In fact, he thinks that AI-generated content is the future of “social” media and Meta believes that one day soon we will all be creating AI-generated profiles that will operate semiautonomously on Meta’s platforms. 


While I was writing this, Faithful posted another AI-generated image of a grave with the caption “NEW TOYS APPEARED ON MY SON'S GRAVE EVERY DAY, SO I DECIDED TO FIND OUT WHO WAS DOING IT.” The image links in the comments to Daily Home Gardening, which loaded 64 distinct ads, plus an infinite scroll of ads at the bottom of the page. 

Hundreds of Subreddits Are Considering Banning All Links to X

22 January 2025 at 09:04

Hundreds of subreddits are considering banning all links to X.com in response to Elon Musk’s salute at a Donald Trump inauguration rally, which was celebrated by Nazis as being a Nazi salute. Moderators of dozens of those subreddits say they have decided to ban all links to X.

Here is a video scroll of just some of the hundreds of subreddits that have considered the move over the last 24 hours, with many moderators putting the idea up for a vote among a subreddit’s subscribers:

(Video: 1:44)

The bans have run the political gamut, with the subreddits for many cities and states (such as r/NewJersey and r/londonOntario) and for sports teams banning all Twitter links. r/christianity banned Twitter links with a gif in which Musk’s salute was put side-by-side with a neo-Nazi’s.

Some subreddits are allowing screenshots from Twitter but not direct linking. Big sports subreddits such as r/NFL, r/hockey, r/baseball, and r/nba are all considering a ban, with moderators saying they will announce decisions shortly. A poll in r/baseball shows that users are overwhelmingly in favor of having links banned in the subreddit. 

On r/formula1, moderators decided to ban links from Twitter except in cases where the information cannot be found elsewhere, and specifically from Formula 1 drivers and some others who haven’t yet moved to other platforms. “For a trial period we will ban all content from Twitter with the only exception of screenshots of relevant posts by teams, drivers & F1 that are not available on any other platform. Even in case of major breaking news, we ask you to post links to the press releases or a screenshot of the post from Instagram, with a link in the comments.” The ban is a trial, with the hope that it will encourage Formula 1 journalists and creators to move to other platforms, it says.

Moderators of r/ComicBookMovies posted “With recent events, we (mods) have decided we will no longer use our sub to promote x in any way, shape or form. Following in the footsteps of other subs, we will ban any links or post coming from x, including in the comment sections. While many of our post come from x, moving forward all post will need to come from another source, such as bluesky, approved websites, your own creation, etc.”

The moderators of r/MadeMeSmile, which has more than 10 million subscribers, posted “Would it make you smile if we banned all links to Twitter?” The moderators of r/DnD are similarly considering the move. Posts about banning or potentially banning Twitter links are some of the most popular posts on all of Reddit over the last day. 

Some of the subreddits are also banning material from Meta platforms after Mark Zuckerberg explicitly decided to allow hate to proliferate across his apps, and TikTok and Rednote because of their companies being based in China. “Hello, everyone! Following recent events in social media, we are updating our content policy,” a message on r/antiwork reads. It then says that the following sites may no longer be linked to, or have screenshots from them uploaded: “X, including content from its predecessor Twitter, because Elon Musk promotes white supremacist ideology and gave a Nazi salute during Donald Trump's inauguration,” “Any platform owned by Meta, such as Facebook and Instagram, because Mark Zuckerberg openly encourages bigotry with Meta's new content policy,” and “Platforms affiliated with the CCP, such as TikTok and Rednote, because China is a hostile foreign government and these platforms constitute information warfare.”

The concerted mass action to ban Twitter links is notable because it highlights the difference between moderation on a platform like Reddit and the moderation on platforms like X, Facebook, and Instagram. Reddit’s distributed, volunteer moderation system in which users become the moderators of specific subreddits means they are also in charge of the rules and norms of that subreddit. This means that moderators are, to some extent, beholden to the wishes of subscribers of a subreddit. They are also able to create rules that fit the needs and wants of a specific community. Reddit’s system has many problems: moderators are constantly fighting with Reddit administrators, who are paid Reddit employees; moderators are not paid for their labor; moderators often say they feel burnt out. But the system also fosters a more humane version of the internet where users have more control and the needs of a specific community can be more easily met. 

Medical Device Company Tells Hospitals They're No Longer Allowed to Fix Machine That Costs Six Figures

22 January 2025 at 07:24

The manufacturer of a machine that costs six figures used during heart surgery has told hospitals that it will no longer allow hospitals’ repair technicians to maintain or fix the devices and that all repairs must now be done by the manufacturer itself, according to a letter obtained by 404 Media. The change will require hospitals to enter into repair contracts with the manufacturer, which will ultimately drive up medical costs, a person familiar with the devices said. 

The company, Terumo Cardiovascular, makes a device called the Advanced Perfusion System 1 Heart Lung Machine, which is used to reroute blood during open-heart surgeries and essentially keeps a patient alive during the surgery. Last month, the company sent hospitals a letter alerting them to the “discontinuation of certification classes,” meaning it “will no longer offer certification classes for the repair and/or preventative maintenance of the System 1 and its components.” 

This means it will no longer teach hospital repair techs how to maintain and fix the devices, and will no longer certify in-house hospital repair technicians. Instead, the company “will continue to provide direct servicing for the System 1 and its components.” 

On the surface, this may sound like a reasonable change, but it is one that is emblematic of a larger trend in hospitals. Medical device manufacturers are increasingly trying to prevent hospitals' own in-house staff from maintaining and repairing broken equipment, even when they are entirely qualified to do so. And in some cases, technicians who know how to repair specific devices are being prevented from doing so because manufacturers are revoking certifications or refusing to provide ongoing training that they once offered. Terumo certifications usually last for two years. It told hospitals that “your current certification will remain valid through its expiration date but will not be renewed once it expires.”


Decentralized Social Media Is the Only Alternative to the Tech Oligarchy

21 January 2025 at 09:33

If it wasn’t already obvious, the last 72 hours have made it crystal clear that it is urgent to build and mainstream alternative, decentralized social media platforms that are resistant to government censorship and control, are not owned by oligarchs and dominated by their algorithms, and in which users own their follower list and can port it elsewhere easily and without restriction.

Besides all of the “normal” problems with corporate social media—the surveillance capitalism, the AI spam, the opaque algorithms—let’s take stock of what has happened in the last few days. 

First, millions of small business owners and influencers who make a living on TikTok were left to beg their followers in TikTok’s last moments to follow them elsewhere in hopes of being able to continue their businesses on other corporate social media platforms. This had the effect of fracturing and destroying people’s audiences overnight, with one act of government. 

TikTok has since come back, but it is still unclear what the future of the platform is, and TikTok now exists at the whim of President Trump and is beholden to him to an unknown extent. TikTok’s status in the United States is still up in the air—it is still not available for download in the iOS App Store or the Google Play Store, and it could disappear at any moment if service providers like Oracle decide that Trump’s executive order and assurances that they will not be prosecuted or fined are not enough to keep the app online. 

Elon Musk, who had already turned X into a cesspool of hate and an overt tool to get President Trump elected, is now formally part of the Trump administration, meaning the platform is literally owned by a member of the Trump White House. 

Meta has made an overt shift to the right, and Mark Zuckerberg has himself become a Trump booster. The platform is making its content moderation worse, has declared that immigrants and LGBTQ+ people are legitimate targets for hate speech, and has made many of these changes at the behest of the Trump White House and Stephen Miller, according to The New York Times.

Donald Trump Has Mark Zuckerberg By the Balls

16 January 2025 at 09:00

Mark Zuckerberg can see the finish line. He is so close to getting what he has wanted for years. The U.S. government is trying to give him the greatest gift he could possibly imagine: A TikTok ban. This would be U.S. intervention against the most credible competitor Meta has seen in years, and U.S. intervention to kill a superior product to the benefit of an American company.

On Joe Rogan last week, Zuckerberg said that the U.S. government “should be defending its companies, not be the tip of the spear attacking its companies.” And yet, in this case, the U.S. government—the Biden administration that he has been railing against as he pivots to MAGA—has squarely aimed its spear at Meta’s biggest, most credible competitor in a move that would greatly benefit Zuckerberg and his company.

Everything that Zuckerberg is doing right now—Meta’s shift rightward; its dehumanizing of immigrants and LGBTQ+ users and employees; its move away from diversity-focused hiring; his trips to Mar-a-Lago; removing tampons from the men’s bathrooms at Meta offices; the inauguration party—should be seen in the broader context that Meta would benefit enormously from a TikTok ban and that Donald Trump, who will be sworn in as president on Monday, is the one person who, at this point, credibly has the ability to reverse a ban. 

Zuckerberg’s very public pledge of fealty to Trump has multiple purposes, of course. Trump previously threatened to put Zuckerberg in jail, and he is obviously cozying up to an administration that he hopes will not regulate his companies. But a TikTok ban is the biggest potential prize. Trump has Zuckerberg by his apparently very masculine balls, and is positioning himself as being the ultimate decider on what will happen to TikTok.

Zuckerberg’s political persuasions and positions have always shifted with whatever suits his companies most at that moment in time, which is something that became more clear as I went back through many hours of Congressional testimony and political speeches that Zuckerberg has given over the last few years. One thing that has not changed, however, is Zuckerberg’s obsession with using the specter of Chinese internet dominance and competition to both avoid consequences for his own company and to lay the groundwork for government regulation on Chinese platforms like TikTok.

Meta has denied directly lobbying on the TikTok ban, but the company spent a record sum lobbying in 2024, including on “Homeland Security” topics. In March 2022, the Washington Post reported that Meta paid a firm called Targeted Victory to push the narrative that TikTok is dangerous to children. And Zuckerberg himself has spent the last five years painting a picture to Congress that his monopolistic company faces great competition, actually, from Chinese companies and more importantly from China itself. This story has served Meta extraordinarily well, as he has been able to distract from Meta’s myriad privacy violations and monopolistic actions by saying it would be worse if China wins. Meta is not a monopoly, he says. It is a company fighting on behalf of America against China and Chinese companies for the soul of the internet.

Zuckerberg made this argument most clear at a speech at Georgetown University in 2019. 

“The larger question about the future of our global internet. You know China is building its own internet focused on very different values, and it’s now exporting their vision of the internet to other countries,” Zuckerberg said. “Until recently, the internet in almost every country outside of China has been defined by American platforms with free expression values. But there’s no guarantee that those values will win out. A decade ago almost all of the major internet platforms were American. Today, six of the top 10 are Chinese and we’re beginning to see this in social media too.”

“While our services like WhatsApp are used by protesters and activists everywhere due to strong encryption and privacy protections, on TikTok, the Chinese app growing quickly around the world, mentions of these same protests are censored even here in the US,” he added.

A few days after Zuckerberg’s Georgetown speech, Zuckerberg had a seven-hour hearing before the House Financial Services Committee in which he stated that an entire cryptocurrency system Facebook was spinning up called Libra was so incredibly important to U.S. financial and cultural dominance that if Congress imposed restrictions on it, Xi Jinping would win. (Libra, later called Diem and then sold to Silvergate Bank in 2022, is now dead.)

“I think there are completely valid questions about how a project like this would impact America’s financial leadership, our ability to impose sanctions around the world, our oversight of the financial system in a lot of places,” Zuckerberg said. “And I just think that we need to trade off and think about and weigh any risks of a new system against what I think are surely risks if a Chinese financial system becomes the standard.”

This general reasoning from Zuckerberg prompted Rep. Anthony Gonzalez to say that “you’re painting this as if we don’t do it, China will do it. I think you’ll be hard pressed to find somebody who is more of a hawk on China in this committee. So I agree with that. What I don’t think is the right frame is ‘If Mark Zuckerberg and Facebook don’t do it, then Xi Jinping will.’ Like, this isn’t Mark Zuckerberg versus Xi Jinping … Facebook doing this, frankly, I don’t trust it and I don’t believe the American people trust it.”

Watching Zuckerberg’s old testimony made clear that he lied repeatedly on Joe Rogan last week about all sorts of things. For example, he practically begged Congress to regulate Meta in July 2020—during the Trump administration—while groveling about how seriously the company takes things like COVID misinformation and election integrity. On Rogan, he suggested such concerns were foisted upon him by the media and the Biden administration. Zuckerberg also seemed dumbfounded on Rogan that the Consumer Financial Protection Bureau and Elizabeth Warren were interested in his company, which, again, notoriously attempted to launch an entirely new global monetary system with Libra.

“We had organizations that were looking into us that were, like, not really involved with social media,” Zuckerberg told Rogan. “Like, I think, like, the CFPB, like, this financial I don't even know what it stands for. It's the, it's the financial organization, that Elizabeth Warren had set up. And it's basically like, we're not a bank.”

During another Congressional hearing in 2018, Zuckerberg’s prepared notes said “Break Up FB? U.S. tech companies key asset for America; break up strengthens Chinese companies.” And, again, in 2020 he told Congress: “If you look at where the top technology companies come from, the vast majority a decade ago were America. Today, almost half are Chinese … Facebook stands for a set of basic principles. Giving people voice and economic opportunity. Keeping people safe. Upholding democratic traditions like freedom of expression and voting and enabling an open and competitive marketplace. These are fundamental values for most of us, but not for everyone in the world. Not every company we compete with or the countries they represent. As global competition increases, there is no guarantee that our values will win out.” 

Meta, and, specifically, Instagram Reels, would be the most obvious beneficiary of a TikTok ban. TikTok has now what Facebook once had, and which Instagram has but is losing: Deep cultural relevance and a generation of users who love it. Facebook and Instagram still have billions of users, but AI spam, a terrible algorithm that seemingly universally surfaces cringe, and a huge number of bots and people who post like their brains are made of mashed potatoes have made both Facebook and Instagram feel like platforms that people remain on begrudgingly, not because they actually want to be there. 

Zuckerberg’s general narrative that Meta faces intense competition from China and that Chinese social media companies cannot be trusted persisted across all of his Congressional hearings, and he has repeatedly used the specter of China exporting its cultural values via social media as a shield to deflect from his own company’s monopolistic tendencies, its privacy violations, and its harms against children. He has used this not only to avoid regulation of his company and his platforms, but also to seed the ground for what ultimately became the TikTok ban. Zuckerberg can see the prize. The question is whether, by kissing Trump’s ass, he will actually finally get it.

FTC Sues John Deere Over Its Repair Monopoly

15 January 2025 at 08:56

The Biden administration and the states of Illinois and Minnesota sued tractor and agricultural equipment manufacturer John Deere Wednesday, arguing that the company’s anti-consumer repair practices have driven up prices for farmers and have made it difficult for them to get repairs during critical planting and harvesting seasons. The lawsuit alleges that Deere has monopoly power over the repair market, which 404 Media has been reporting on for years.

The lawsuit, filed by the Federal Trade Commission and the attorneys general of Illinois and Minnesota, is the latest and most serious legal salvo against Deere’s repair monopoly. Deere is also facing a class-action lawsuit related to its repair practices from consumers in Illinois that the Department of Justice and other federal entities have signaled they are interested in and support, as we reported last year. 

“The Federal Trade Commission today files suit against agricultural equipment manufacturer Deere & Company, stating that it has illegally restricted the ability of farmers and independent technicians to repair Deere equipment, including tractors and combines,” FTC commissioner Lina Khan wrote in a formal comment explaining the decision. 

💡
Do you work at John Deere or the FTC? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702. Otherwise, send me an email at [email protected].

Deere has become notorious for cornering the repair market on its machines, which include tractors, combines, and other major agricultural equipment, by introducing software locks that prevent farmers from fixing the equipment they buy without the authorization of John Deere.

It has also made repair parts difficult to come by. Deere previously promised to make certain repairs easier for consumers with a “Memorandum of Understanding” (MOU) signed with a farming organization that was supposed to make it possible for farmers to do some repairs and obtain some specific parts; implementation of that MOU has been incredibly uneven, according to farmers. In October, Sen. Elizabeth Warren said that Deere was not honoring that agreement and demanded answers to several questions about it; Deere has not yet responded.

The FTC lawsuit specifically states that the MOU was designed to kill right-to-repair legislation and repair regulation against the company.

"Deere invoked its release of Customer Service ADVISOR and its MOU with the Farm Bureau to stymie state 'right-to-repair' legislation that would otherwise have required Deere to make fully functional repair tools available to customers," it stated.

It also highlights the fact that Deere has released a version of its repair software, called "Service Advisor," to the public (which costs $3,160 per year). But the version of the software released to the public can only do certain repairs and is not fully functional.

"Deere offers two versions of its electronic repair tool: (1) Full-Function Service ADVISOR, a fully functional repair tool that Deere makes available only to Deere dealers, and (2) a degraded Customer Service ADVISOR, which Deere licenses to equipment owners, IRPs [Independent repair providers], and others," it states.

"Deere has acquired and maintained monopoly power in a relevant market for the provision of repair services that require the use of a fully functional repair tool. Through its limited distribution of the repair tool, Deere controls entry into, and limits output in, the provision of such services," the lawsuit added. "As a consequence, Deere’s dealers are able to maintain a 100% market share and charge supracompetitive prices for restricted repairs, and Deere itself reaps additional profits through parts sales."

Farmers have told 404 Media that they remain unable to do many types of repairs, and that it can sometimes take days for “authorized” John Deere or John Deere dealer technicians to come fix broken equipment. In farming, this delay can result in lost harvest, crucial delays in planting, and dying crops during critical periods of the farming season. 

“These delays can mean that months of hard work and much-needed income vanish, devastating their business. In rural communities, the restrictions can sometimes mean that farmers need to drive hours just to get their equipment fixed,” Khan wrote. “For those who have long fixed their own equipment, these artificial restrictions can seem especially inefficient, with tractors needlessly sitting idle as farmers and independent mechanics are held back from using their skill and talent.”

The lawsuit, in the waning days of the Biden administration, is the most serious punitive act the federal government has ever taken to break up a repair monopoly and to support consumers’ right to repair. For years, the FTC has issued reports about repairability and manufacturer dominance of the repair market, but aside from a few small fines, has not formally sued any company. The steps Deere has taken to secure a repair monopoly are among the most egregious of any manufacturer in any industry, which has led farmers in some cases to resort to hacking their own tractors for the purposes of repair, sometimes using software pirated from Ukraine and other countries.

“We shouldn’t tolerate companies blocking repair,” Nathan Proctor, the consumer rights group PIRG’s Senior Right to Repair Campaign Director, said. “When you buy something, you should be able to do whatever you want with it. The FTC’s enforcement action will help farmers, and everyone else who believes people should be able to fix their stuff.”

Meta Is Blocking Links to Decentralized Instagram Competitor Pixelfed

13 January 2025 at 09:06

Meta is deleting links to Pixelfed, a decentralized Instagram competitor. On Facebook, the company is labeling links to Pixelfed.social as “spam” and deleting them immediately. 

Pixelfed is an open-source, community-funded and decentralized image sharing platform that runs on ActivityPub, the same technology that supports Mastodon and other federated services. Pixelfed.social, the largest Pixelfed server, launched in 2018 but has gained renewed attention over the last week.
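As a rough illustration of what that federation means in practice: ActivityPub services like Mastodon and Pixelfed typically expose account discovery through the WebFinger standard, so any other server can look up an account before following it. The handle in the sketch below is hypothetical; this illustrates the protocol generally, not Pixelfed-specific code.

```python
# Sketch of WebFinger account discovery, as used by ActivityPub servers.
# The account handle below is hypothetical.
import requests

DOMAIN = "pixelfed.social"
HANDLE = f"someuser@{DOMAIN}"  # hypothetical account

resp = requests.get(
    f"https://{DOMAIN}/.well-known/webfinger",
    params={"resource": f"acct:{HANDLE}"},
    timeout=10,
)
resp.raise_for_status()

# The response lists links for the account, including its ActivityPub
# actor document, which other federated servers fetch in order to follow it.
for link in resp.json().get("links", []):
    if link.get("rel") == "self":
        print("ActivityPub actor:", link.get("href"))
```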

Bluesky user AJ Sadauskas originally posted that links to Pixelfed were being deleted by Meta; 404 Media then also tried to post a link to Pixelfed on Facebook. It was immediately deleted. 


Pixelfed has seen a surge in user signups in recent days, after Meta announced that it would loosen its rules to allow users to call LGBTQ+ people “mentally ill,” amid a host of other changes that shift the company overtly to the right. Meta and Instagram have also leaned heavily into AI-generated content. Pixelfed announced earlier Monday that it is launching an iOS app later this week. 

Pixelfed said Sunday it is “seeing unprecedented levels of traffic to pixelfed.social.”

Over the weekend, Daniel Supernault, the creator of Pixelfed, published a “declaration of fundamental rights and principles for ethical digital platforms, ensuring privacy, dignity, and fairness in online spaces.” The open source charter, which has been adopted by Pixelfed and can be adopted by other platforms, contains sections titled “right to privacy,” “freedom from surveillance,” “safeguards against hate speech,” “strong protections for vulnerable communities,” and “data portability and user agency.” 

“Pixelfed is a lot of things, but one thing it is not, is an opportunity for VC or others to ruin the vibe. I've turned down VC funding and will not inject advertising of any form into the project,” Supernault wrote on Mastodon. “Pixelfed is for the people, period.”

Meta did not respond to a request for comment.

Meta Deletes Trans and Nonbinary Messenger Themes

10 January 2025 at 09:08

Meta deleted nonbinary and trans themes for its Messenger app this week, around the same time that the company announced it would change its rules to allow users to declare that LGBTQ+ people are “mentally ill,” 404 Media has learned.

Meta’s Messenger app allows users to change the color scheme and design of their chat windows with different themes. For example, there is currently a “Squid Game” theme, a “Minecraft” theme, a “Basketball” theme, and a “Love” theme, among many others. 

These themes regularly change, but for the last few years they have featured a “trans” theme and a “nonbinary” theme, which had color schemes that matched the trans pride flag and the nonbinary pride flag. Meta did not respond to a request for comment about why the company removed these themes, but the change comes right as Mark Zuckerberg’s company is publicly and loudly shifting rightward to more closely align itself with the views of the incoming Donald Trump administration. 404 Media reported Thursday that many employees are protesting the anti-LGBTQ+ changes and that “it’s total chaos internally at Meta right now” because of the changes.

💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.

The trans theme was announced for Pride Month in June 2021, and the nonbinary theme was announced in June 2022 in blog posts that highlighted Meta’s apparent support for trans and nonbinary people. Both of these posts are no longer online. Other blogs about updates to Messenger have been moved over from the old website they were originally published on to new URLs on the Meta newsroom, but these two blog posts have not.

“This June and beyond, we want people to #ConnectWithPride because when we show up as the most authentic version of ourselves, we can truly connect with people,” the post announcing the trans theme originally said. “Starting today, in support of the LGBTQ+ community and allies, Messenger is launching new expression features and celebrating the artists and creators who not only developed them, but inspire us each and every day.” 

‘It’s Total Chaos Internally at Meta Right Now’: Employees Protest Zuckerberg’s Anti LGBTQ Changes

9 January 2025 at 12:58

Meta employees are furious with the company’s newly announced content moderation changes that will allow users to say that LGBTQ+ people have “mental illness,” according to internal conversations obtained by 404 Media and interviews with five current employees. The changes were part of a larger shift Mark Zuckerberg announced Monday to do far less content moderation on Meta platforms. 

“I am LGBT and Mentally Ill,” one post by an employee on an internal Meta platform called Workplace reads. “Just to let you know that I’ll be taking time out to look after my mental health.” 

On Monday, Mark Zuckerberg announced that the company would be getting “back to our roots around free expression” to allow “more speech and fewer mistakes.” The company said “we’re getting rid of a number of restrictions on topics like immigration, gender identity, and gender that are the subject of frequent political discourse and debate.” A review of Meta’s official content moderation policies shows that some of the only substantive changes to the policy were made specifically to allow for “allegations of mental illness or abnormality when based on gender or sexual orientation.” It has long been known that being LGBTQ+ is not a sign of “mental illness,” and the false idea that sexuality or gender identification is a mental illness has long been used to stigmatize and discriminate against LGBTQ+ people.

Earlier this week, we reported that Meta was deleting internal dissent about Zuckerberg's appointment of UFC President Dana White to the Meta board of directors.

💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.

‘We’re Fine’: Lying to Ourselves About a Climate Disaster

9 January 2025 at 08:47

In 2020, after walking by refrigerated trailers full of the bodies of people who died during the first wave of the COVID-19 pandemic one too many times, my fiancé and I decided that it would maybe be a good idea to get out of New York City for a while. Together with our dog, we spent months driving across the country and eventually made it to Los Angeles, where we intended to stay for two weeks. We arrived just in time for the worst COVID spike since the one we had just experienced in New York. It turned out we couldn’t and didn’t want to leave. Our two-week stay has become five years.

While debating whether we were going to move to Los Angeles full time, my partner and I joked that we had to choose between the “fire coast” and the “water coast.” New York City had been getting pummeled by a series of tropical storms and downpours, and vast swaths of California were fighting some of the most devastating wildfires it had ever seen. We settled on the fire coast, mostly to try something new. 

It turns out this was a false choice. Since we’ve moved to Los Angeles, we have experienced the heaviest rains in the city’s recorded history, the first hurricane to ever trigger a tropical storm warning in Los Angeles, and, of course, the fires. New York City, meanwhile, has been hit by tropical storms and this summer fought an out-of-control brushfire in Prospect Park after a record drought. Both coasts are the fire coast, and the water coast. 

We have been very lucky, and very privileged. Our apartment is in Venice Beach, which is probably not going to burn down. This time, we will not lose our lives, our things, our memories. We had the money and the ability to evacuate from Los Angeles on Wednesday morning after it became clear to us that we should not stay. What is happening is a massive tragedy for the city of Los Angeles, the families who have lost their homes, businesses and schools. 

I am writing this to try to understand my place in a truly horrifying event, and to try to understand how we are all supposed to process the ongoing slow- and fast-moving climate change-fueled disasters that we have all experienced, are experiencing, and will definitely experience in the future. My group chats and Instagram stories are full of my friends saying that they are fine, followed by stories and messages explaining that actually, they are not fine. Stories that start with “we’re safe, thank you for asking” have almost uniformly been followed with “circumstances have changed, we have evacuated Los Angeles.” Almost all of my friends in the city have now left their homes to go somewhere safer; some people I know have lost their homes.

I knew when I moved to Los Angeles that we would to some extent experience fires and earthquakes. I live in a “tsunami hazard zone.” I also know that there is no place that is safe from climate change and climate-fueled disaster, as we saw last year when parts of North Carolina that were considered to be “safer” from climate change were devastated by Hurricane Helene.

We are living in The Cool Zone, and, while I love my life, am very lucky, and have been less directly affected by COVID, political violence, war, and natural disasters than many people, I am starting to understand that maybe this is all taking a toll. Firefighters and people who have lost their homes are experiencing true hell. What I am experiencing is something more like the constant mundanity of dystopia that surrounds the direct horror but is decidedly also bad.

I knew it would be windy earlier this week because I check the surf forecast every day on an app called Surfline, which has cameras and weather monitoring up and down nearly every coast in the world. The Santa Ana winds—a powerful wind phenomenon I learned about only after moving to California—would be offshore, meaning they would blow from the land out to sea. This is somewhat rare in Los Angeles and also makes for very good, barreling waves. I was excited. 

I had a busy day Tuesday and learned about the fire because the Surfline cameras near the fire were down. In fact, you can see what it looked like as the fires overtook the camera at Sunset Point here: 

(Video: 0:18)

The camera livestream was replaced with a note saying “this camera is offline due to infrastructure issues caused by local wildfires.” The surf forecast did not mention anything about a fire. 


I walked out to the beach and could see the mountains on fire, the smoke plumes blowing both out to sea and right over me. The ocean was indeed firing—meaning the waves were good—and lots of people were surfing. A few people were milling around the beach taking photos and videos of the fire like I was. By the time the sun started setting, there were huge crowds of people watching the fire. It was around this time that I realized I was having trouble breathing, my eyes were watering, and my throat was scratchy. My family locked ourselves into our bedroom with an air purifier running. Last week, we realized that we desperately needed to replace the filter, but we did not. A friend told us the air was better near them, so we went to their house for dinner. 

While we were having dinner, the size of the fire doubled, and a second one broke out. Our phones blared emergency alerts. We downloaded Watch Duty, which is a nonprofit wildfire monitoring app. Most of the wildfire-monitoring cameras in the Pacific Palisades had been knocked offline; the ones in Santa Monica pointing towards the Palisades showed a raging fire.


Every few minutes the app sent us push notifications that the fire was rapidly expanding, that firefighters were overwhelmed, that evacuation orders had expanded and were beginning to creep toward our neighborhood. I opened Instagram and learned that Malibu’s Reel Inn, one of our favorite restaurants, had burned to the ground.

Apple Intelligence began summarizing all of the notifications I was getting from my various apps. “Multiple wildfires in Los Angeles, causing destruction and injuries,” from the neighborhood watch app Citizen, which I have only because of an article I did about the last time there was a fire in Pacific Palisades. Apple Intelligence’s summary of a group chat I’m in: “Saddened by situation; Instagram shared.” From a friend: "Wants to chat about existential questions." A summary from the LA Times: “Over 1,000 structures burned in LA Count wildfires; firefighter were overwhelmed.” From Nextdoor: “Restaurants destroyed.” 


Earlier on Tuesday, I texted my mom “yes we are fine, it is very far away from us. It is many miles from us. We have an air purifier. It’s fine.” I began to tell people who asked that the problem for us was "just" the oppressive smoke, and the fact that we could not breathe. By the time we were going to bed, it became increasingly clear that it was not necessarily fine, and that it might be best if we left. I opened Bluesky and saw an image of a Cybertruck sitting in front of a burnt out mansion. A few posts later, I saw the same image but a Parental Advisory sticker had been photoshopped onto it. I clicked over to X and saw that people were spamming AI generated images of the fire.


We began wondering if we should drive toward cleaner air. We went home and tried to sleep. I woke up every hour because I was having trouble breathing. As the sun was supposed to be rising in the morning, it became clear that it was being hidden by thick clouds of smoke. 

Within minutes of waking up, we knew that we should leave. That we would be leaving. I opened Airbnb and booked something. We do not have a “Go Bag,” but we did have time to pack. I aimlessly wandered around my apartment throwing things into bags and boxes, packing things that I did not need and leaving things that I should have brought. In the closet, I pushed aside our boxes of COVID tests to get to our box of N-95 masks. I packed a whole microphone rig because I need to record a podcast Friday. 

I emailed the 404 Media customers who bought merch and told them it would be delayed because I had to leave my home and cannot mail them. I canceled meetings and calls with sources who I wanted to talk to. 

Our next-door neighbor texted us, saying that she would actually be able to make it to a meeting next week with our landlord about a shared beef we’re having with them. Originally she thought she would have to work during the time the meeting was scheduled. She works at a school in the Palisades. Her school burned down. So had her sister’s house. I saw my neighbor right before we left. I told her I would be back on Friday. I had a flashback to my last day in the VICE office in March 2020, when they sent us home for COVID. I told everyone I would see them in a week or two. Some of those people I never saw again.

Image: Jason Koebler

A friend texted me to tell me that the place we had been on a beautiful hike a few weeks ago was on fire: “sad and glad we went,” he said. A friend in Richmond, Virginia texted to ask if I was OK. I told him yes but that it was very scary. I asked him how he was doing. He responded, “We had a bad ice storm this week and that caused a power outage at water treatment that then caused server crashes and electrical equipment to get flooded. The whole city has been without water since Monday.” He told me he was supposed to come to Los Angeles for work this weekend. He was canceling his flight.

A group chat asked me if I was OK. I told them that I did not want to be dramatic but that we were having a hard time but were ultimately safe. I explained some of what we had been doing and why. The chat responded saying that “it’s insane how you start this by saying it sounds more dramatic than it is, only to then describe multiple horrors. I am mostly just glad you are safe.”

We got in the car. We started driving. I watched a driverless Waymo navigate streets in which the traffic lights were out because the power was out. My fiancé took two work meetings on the road, tethered to her phone, our dog sitting on her lap. We stopped at a fast food drive through.

Once we were out of Los Angeles, I stopped at a Best Buy to get an air purifier. On my phone, I searched the reviews for the one they had on sale. I picked one out. The employee tried to sell me an extended warranty plan. I said no thank you, got back in the car, and kept driving away from the fire. I do not know when we will be able to go back.

Researcher Turns Insecure License Plate Cameras Into Open Source Surveillance Tool

7 January 2025 at 09:17

Some Motorola automated license plate reader surveillance cameras are live-streaming video and car data to the unsecured internet where anyone can watch and scrape them, a security researcher has found. In a proof-of-concept, a privacy advocate then developed a tool that automatically scans the exposed footage for license plates, and dumps that information into a spreadsheet, allowing someone to track the movements of others in real time.

Matt Brown of Brown Fine Security made a series of YouTube videos showing vulnerabilities in a Motorola Reaper HD ALPR that he bought on eBay. As we have reported previously, these ALPRs are deployed all over the United States by cities and police departments. Brown initially found that it is possible to view the video and data that these cameras are collecting if you join the private networks that they are operating on. But then he found that many of them are misconfigured to stream to the open internet rather than a private network.

“My initial videos were showing that if you’re on the same network, you can access the video stream without authentication,” Brown told 404 Media in a video chat. “But then I asked the question: What if somebody misconfigured this and instead of it being on a private network, some of these found their way onto the public internet?” 

In his most recent video, Brown shows that many of these cameras are indeed misconfigured to stream both the video and the data they are collecting to the open internet; their IP addresses can be found using the Internet of Things search engine Censys. The streams can be watched without any sort of login.

In many cases, they are streaming color video as well as infrared black-and-white video of the streets they are surveilling, and are broadcasting that data, including license plate information, onto the internet in real time. 

Will Freeman, the creator of DeFlock, an open-source map of ALPRs in the United States, said that people in the DeFlock community have found many ALPRs that are streaming to the open internet. Freeman built a proof of concept script that takes data from unencrypted Motorola ALPR streams, decodes that data, and adds timestamped information about specific car movements into a spreadsheet. A spreadsheet he sent me shows a car’s make, model, color, and license plate number associated with the specific time it drove past an unencrypted ALPR near Chicago. So far, roughly 170 unencrypted ALPR streams have been found.
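
To make the mechanics concrete, here is a rough sketch of what a logger in the spirit of Freeman’s proof of concept could look like. It is not his actual script: the exposed Motorola streams are not a documented API, so the /detections endpoint, the JSON field names, and the placeholder camera addresses below are invented for illustration.

# Rough illustration of a plate-read logger. It assumes an exposed camera
# publishes detections as JSON over unauthenticated HTTP; the endpoint
# path and field names are invented and will not match a real device.
import csv
import os
import time
from datetime import datetime, timezone

import requests

# Placeholder addresses standing in for exposed cameras found via Censys.
CAMERAS = [
    "http://203.0.113.10/detections",
    "http://203.0.113.27/detections",
]

def poll_camera(url: str) -> list[dict]:
    """Fetch the latest plate reads from one exposed camera (hypothetical JSON API)."""
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    return resp.json().get("reads", [])

def log_reads(outfile: str = "plates.csv", interval: int = 10) -> None:
    """Append timestamped plate reads from every camera to a CSV file."""
    is_new = not os.path.exists(outfile) or os.path.getsize(outfile) == 0
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp_utc", "camera", "plate", "make", "model", "color"])
        while True:
            for url in CAMERAS:
                try:
                    for read in poll_camera(url):
                        writer.writerow([
                            datetime.now(timezone.utc).isoformat(),
                            url,
                            read.get("plate"),
                            read.get("make"),
                            read.get("model"),
                            read.get("color"),
                        ])
                except requests.RequestException:
                    continue  # skip cameras that are offline or time out
            f.flush()
            time.sleep(interval)

if __name__ == "__main__":
    log_reads()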

“Let’s say 10 of them are in a city at strategic locations. If you connect to all 10 of them, you’d be able to track regular movements of people,” Freeman said. 

Researcher Turns Insecure License Plate Cameras Into Open Source Surveillance Tool

Freeman told 404 Media that this fact is more evidence that the proliferation of ALPRs around the United States and the world represents a significant privacy risk; he has been a strong advocate against the widespread adoption of ALPRs.

“I’ve always thought these things were concerning, but this just goes to show that law enforcement agencies and the companies that provide ALPRs are no different than any other data company and can’t be trusted with this information,” Freeman told 404 Media. “So when a police department says there’s nothing to worry about unless you’re a criminal, there definitely is. Here’s evidence of a ton of cameras operated by law enforcement freely streaming sensitive data they’re collecting on us. My hometown is mostly Motorola [ALPRs], so someone could simply write a script that maps vehicles to times and precise locations.”

A Motorola Solutions spokesperson told 404 Media that the company is working on a firmware update that “will introduce additional security hardening.”

“Motorola Solutions designs, develops and deploys our products to prioritize data security and protect the confidentiality, integrity and availability of data,” the spokesperson said. “The ReaperHD camera is a legacy device, sales of which were discontinued in June 2022. Findings in the recent YouTube videos do not pose a risk to customers using their devices in accordance with our recommended configurations. Some customer-modified network configurations potentially exposed certain IP addresses. We are working directly with these customers to restore their system configurations consistent with our recommendations and industry best practices. Our next firmware update will introduce additional security hardening.”

This is not the first time that ALPRs have been found to be streaming directly to the unsecured internet. In 2015, the Electronic Frontier Foundation and researchers at the University of Arizona found hundreds of exposed ALPR streams. In 2019, an ALPR vendor for the Department of Homeland Security was hacked and license plates and images of travelers were leaked onto the dark web. Last year, the U.S. government’s Cybersecurity and Infrastructure Security Agency put out a warning saying that Motorola’s Vigilant ALPR cameras were remotely exploitable. 

Brown said that, although not all Motorola ALPRs are streaming to the internet, the security problems he found are deeply concerning and it’s not likely that ALPR security is something that’s going to suddenly be fixed.

“Let’s say the police or Motorola were like ‘Oh crap, we shouldn’t have put those on the public internet.’ They can clean that up,” he said. “But you still have a super vulnerable device that if you gain access to their network you can see the data. When you deploy the technology into the field, attacks always get easier, they don’t get harder.”

Facebook Deletes Internal Employee Criticism of New Board Member Dana White

7 January 2025 at 08:16
Facebook Deletes Internal Employee Criticism of New Board Member Dana White

Meta’s HR team is deleting internal employee criticism of new board member Dana White, the UFC president and CEO, at the same time that CEO Mark Zuckerberg announced to the world that Meta will “get back to our roots around free expression,” 404 Media has learned. Some employee posts questioning why criticism of White is being deleted are also being deleted. 

On Monday, Zuckerberg made a post on a platform for Meta employees called Workplace announcing that Meta is adding Dana White, John Elkann, and Charlie Songhurst to the company’s board of directors (Zuckerberg’s post on Workplace was identical to his public announcement). Employee response to this was mixed, according to screenshots of the thread obtained by 404 Media. Some posted positive or joking comments: “Major W,” one employee posted. “We hire Connor [McGregor] next for after work sparring?” another said. “Joe Rogan may be next,” a third said. A fourth simply said “LOL.”

💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.

But other employees criticized the decision and raised the point that there is video of White slapping his wife in a nightclub; White was not arrested and was not suspended from UFC for the domestic violence incident. McGregor, one of the most famous UFC fighters of all time, was held liable for sexual assault and was ordered by a civil court to pay $260,000 to a woman who accused him of raping her in 2018. McGregor is appealing the decision.

“Kind of disheartening to see people in the comments celebrating a man who is on video assaulting his wife and another who was recently convicted of rape,” one employee commented, referring to White and McGregor. “I can kind of excuse individuals for being unaware, but Meta surely did their due diligence on White and concluded that what he did is fine. I feel like I’m on another planet,” another employee commented. “We have completely lost the plot,” a third said. 

Several posts critical of White were deleted by Meta’s “Internal Community Relations team” as violating a set of rules called the “Community Engagement Expectations,” which govern internal employee communications. In the thread, the Internal Community Relations team member explained why they were deleting content: “I’m posting a comment here with a reminder about the CEE, as multiple comments have been flagged by the community for review. It’s important that we maintain a respectful work environment where people can do their best work. We need to keep in mind that the CEE applies to how we communicate with and about members of our community—including members of our Board. Insulting, criticizing, or antagonizing our colleagues or Board members is not aligned with the CEE.” In 2022, Meta banned employees from discussing “very disruptive” topics.

Instagram Begins Randomly Showing Users AI-Generated Images of Themselves

6 January 2025 at 15:14
Instagram Begins Randomly Showing Users AI-Generated Images of Themselves

Instagram has begun testing a feature in which Meta’s AI will automatically generate images of users in various situations and put them into that user’s feed. One Redditor posted over the weekend that they were scrolling through Instagram and were presented with an AI-generated slideshow of themselves standing in front of “an endless maze of mirrors,” for example. 

“Used Meta AI to edit a selfie, now Instagram is using my face on ads targeted at me,” the person posted. The user was shown a slideshow of AI-generated images in which an AI version of himself is standing in front of an endless “mirror maze.” “Imagined for you: Mirror maze,” the location of the post reads.

“Imagine yourself reflecting on life in an endless maze of mirrors where you’re the main focus,” the caption of the AI images says. The Reddit user told 404 Media that at one point he had uploaded selfies of himself into Instagram’s “Imagine” feature, which is Meta’s AI image generation feature. 

💡
Do you work at Meta? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702.

People on Reddit initially did not even believe that these were real, posting things like "it's a fake story," "I doubt that this is true," "this is a straight up lie lol," and "why would they do this?" The Redditor has repeatedly had to explain that, yes, this did happen. "I don’t really have a reason to fake this, I posted screenshots on another thread," he said. 404 Media sent the link to the Reddit post directly to Meta, which confirmed that it is real, but not an "ad."

Instagram Begins Randomly Showing Users AI-Generated Images of Themselves

Meta's AI Profiles Are Indistinguishable From Terrible Spam That Took Over Facebook

3 January 2025 at 09:10
Meta's AI Profiles Are Indistinguishable From Terrible Spam That Took Over Facebook

Earlier this week, Meta executive Connor Hayes told the Financial Times that the company is going to roll out AI character profiles on Instagram and Facebook that “exist on our platforms, kind of in the same way that accounts do … they’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.” 

This quote got a lot of attention because it was yet another signal that a “social network” ostensibly made up of human beings and designed for humans to connect with each other is once again betting its future on distinctly inhuman bots designed with the express purpose of polluting its platforms with AI-generated slop, just as spammers are already doing and just as Mark Zuckerberg recently told investors is the explicit plan. In the immediate aftermath of the Financial Times story, people began to notice the exact types of profiles that Hayes was talking about, and assumed that Meta had begun enacting its plan. 

But the Meta-controlled, AI-generated Instagram and Facebook profiles going viral right now have been on the platforms for well over a year, and all of them stopped posting 10 months ago after users almost universally ignored them. Many of the AI-generated profiles that Meta created and announced have been fully deleted; the ones that remain have not posted new content since April 2024, though their chat functionality continues to work. 

People’s understandable aversion to the idea of Meta-controlled AI bots taking up space on Facebook and Instagram has led them to believe that these existing bots are the new ones “announced” by Hayes to the Financial Times. In Hayes’ quote, he says that Meta ultimately envisions releasing tools that allow users to create these characters and profiles, and for those AI profiles to live alongside normal profiles. So Meta has not actually released anything new, but the news cycle has led people to go find Meta’s already existing AI-generated profiles and to realize how utterly terrible they are.

Meta's AI Profiles Are Indistinguishable From Terrible Spam That Took Over Facebook

After this article was originally published, Liz Sweeney, a Meta spokesperson, told 404 Media that "there is confusion" on the internet between what Hayes told the Financial Times and what is being talked about online now, and that Meta is deleting those accounts. 404 Media confirmed that many of the profiles that were live at the time this article was published have since been deleted.

Elon Musk Uses Cybertruck Explosion to Show Tesla Can Remotely Unlock and Monitor Vehicles

2 January 2025 at 09:43
Elon Musk Uses Cybertruck Explosion to Show Tesla Can Remotely Unlock and Monitor Vehicles

Capabilities used in or justified by extreme circumstances often become commonplace and are used for much more mundane things in the future. And so the remote investigative actions taken by Elon Musk in Wednesday’s Cybertruck explosion in Las Vegas are a warning and a reminder that Tesla owners do not actually own their Teslas, and that cars, broadly speaking, are increasingly spying on their owners and the people around them.

After the Cybertruck explosion outside of the Trump International Hotel in Vegas on Wednesday, Elon Musk remotely unlocked the Cybertruck for law enforcement and provided video from charging stations that the truck had visited to track the vehicle’s location, according to information released by law enforcement. 

“We have to thank Elon Musk specifically, he gave us quite a bit of additional information in regards to—the vehicle was locked due to the nature of the force from the explosion, as well as being able to capture all of the video from Tesla charging stations across the country, he sent that directly to us, so I appreciate his help on that,” Clark County Police sheriff Kevin McKahill said in a press conference.  

The fact that the CEO of a car company or someone working on his behalf can—and did—remotely unlock a specific vehicle and has the means of tracking its location as well as what Musk described as the vehicle’s “telemetry” is not surprising given everything we have learned about newer vehicles and Teslas in particular. But it is a stark reminder that while you may be able to drive your car, you increasingly do not own it, that the company that manufactured it can inject itself into the experience whenever it wants, and that information from your private vehicle can be provided to law enforcement. Though Musk is being thanked directly by law enforcement, it is not clear whether Musk himself is performing these actions or whether he’s directing Tesla employees to do so, but Tesla having and using these powers is concerning regardless of who is doing it.

How (and Why) a Reverse Engineer 3D-Printed an iPhone

2 January 2025 at 06:59
How (and Why) a Reverse Engineer 3D-Printed an iPhone

Reverse engineer Scotty Allen made his own iPhone. Well, more accurately, he made his own iPhone enclosure out of a block of aluminum, put the internal components of an iPhone into it, and managed to make it all work. Then, he used the same schematics he made to 3D print a working iPhone enclosure out of nylon carbon fiber.

Like lots of repair and DIY projects on the iPhone, Allen did tons of painstaking work over the course of a year to more or less recreate something that already exists and that most people do not need. But his work opens the door to a more modifiable iPhone and a DIY culture around smartphones that still doesn’t really exist. Essentially, he took a block of aluminum, used a CNC mill to carve it down, and was able to put all of the components in the custom enclosure, the way someone might when they’re building a PC. Along the way, he made CAD files of the inside of an iPhone, which will allow people to recreate his work. It is now possible to download his design files and 3D print your own iPhone shell.

“There's been an open question for me from the beginning, which is like, we have this culture around modding PCs, right? And custom PCs. Why do we not have that for phones?” Allen told me in a video chat. “It’s very celebrated in the culture. But then when you talk about building your own phone, everyone is like, ‘No, that’s crazy.’ Apple is going to sue you.”

As you might expect, the iPhone’s enclosure is not just an empty block of metal. There are various tiny holes for screws and engraved areas for cables and antennas to go. Allen studied all of this and, through a roughly year-long process of trial-and-error, was able to recreate this enclosure and create blueprints for other people to replicate it. 

“This was really difficult because I had to reverse engineer it and there was a lot of time spent figuring out, ‘OK, now I’ve got it drawn, but how do I know everything about the interior walls?’ There’s all these little threaded inserts that are glued in. And I think I actually went about this machining it in a different way than Apple does,” he said.

Over the years, Allen has done lots of cool things with the iPhone, which started with adding a working headphone jack back into the iPhone 7 after Apple removed it. He is part of the right to repair movement, but takes things a step further and says he’s advocating for the right to modify, and the normalization of opening and tweaking things like the iPhone to prove that they’re not just unknowable black boxes.

“I look at this as an infrastructure project, which is, let’s make a 1-to-1 copy,” he said. “It’s not totally 1-to-1, but in terms of overall geometry, it’s a fairly faithful representation and reproduction with the goal that, if you want to do interesting things, you need to start with the boring things first. And now with all the design files that I’ve created, you can really easily begin to modify it to look how you want on the outside, to add space for things on the inside. So this is a base for doing all sorts of more creative things.” 

Allen said that at the moment one challenge is that there are limitations on the types of touchscreens that will work with the iPhone, but that with more work it would be possible to support additional modifications. 

“My notion is that a phone is a device that you can open up and tinker with, or at least repair,” he said. “I think I’ve had a hand in saying, ‘Look, this isn’t a black box that only Apple is allowed to open.’ … I think consistently what I’ve done is poke at the edges and say, ‘What other things can we do with this?’” 

Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet

19 December 2024 at 08:11
Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet

An entity claiming to be United Healthcare is sending bogus copyright claims to internet platforms to get Luigi Mangione fan art taken off the internet, according to the print-on-demand merch retailer TeePublic. An independent journalist was hit with a copyright takedown demand over an image of Luigi Mangione and his family she posted on Bluesky, and other DMCA takedown requests posted to an open database and viewed by 404 Media show copyright claims trying to get “Deny, Defend, Depose” and Luigi Mangione-related merch taken off the internet, though it is unclear who is filing them.

Artist Rachel Kenaston was selling merch with the following design on TeePublic, a print-on-demand shop: 

Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet
Image: Rachel Kenaston

She got an email from TeePublic that said “We're sorry to inform you that an intellectual property claim has been filed by UnitedHealth Group Inc against this design of yours on TeePublic,” and said “Unfortunately, we have no say in which designs stay or go” because of the DMCA. This is not true—platforms are able to assess the validity of any DMCA claim and can decide whether to take the supposedly infringing content down or not. But most platforms choose the path of least resistance and take down content that is obviously not infringing; Kenaston’s clearly violates no one’s copyright. Kenaston appealed the decision and TeePublic told her: “Unfortunately, this was a valid takedown notice sent to us by the proper rightsholder, so we are not allowed to dispute it,” which, again, is not true.

The threat was framed as a “DMCA Takedown Request.” The DMCA is the Digital Millennium Copyright Act, an incredibly important law that governs most copyright enforcement on the internet. Copyright law is complicated, but, basically, DMCA takedowns are filed to give notice to a social media platform, search engine, or website owner to inform them that something they are hosting or pointing to is copyrighted, and then, all too often, the social media platform will take the content down without much of a review in hopes of avoiding being sued.

Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet
The takedown email Kenaston got from TeePublic

“It's not unusual for large companies to troll print-on-demand sites and shut down designs in an effort to scare/intimidate artists, it's happened to me before and it works!,” Kenaston told 404 Media in an email. “The same thing seems to be happening with UnitedHealth - there's no way they own the rights to the security footage of Luigi smiling (and if they do.... wtf.... seems like the public should know that) but since they made a complaint my design has been removed from the site and even if we went to court and I won I'm unsure whether TeePublic would ever put the design back up. So basically, if UnitedHealth's goal is to eliminate Luigi merch from print-on-demand sites, this is an effective strategy that's clearly working for them.”

💡
Do you know anything else about copyfraud or DMCA abuse? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702. Otherwise, send me an email at [email protected].

There is no world in which the copyright of a watercolor painting of Luigi Mangione surveillance footage done by Kenaston is owned by UnitedHealth Group, as it quite literally has nothing to do with anything that the company owns. It is illegal to file a DMCA takedown unless you have a “good faith” belief that you are the rights holder (or are representing the rights holder) of the material in question. 

“What is the circumstance under which United Healthcare might come to own the copyright to a watercolor painting of the guy who assassinated their CEO?” tech rights expert and science fiction author Cory Doctorow told 404 Media in a phone call. “It’s just like, it’s hard to imagine” a lawyer thinking that, he added, saying that it’s an example of “copyfraud.”  

United Healthcare did not respond to multiple requests for comment, and TeePublic also did not respond to a request for comment. It is theoretically possible that another entity impersonated United Healthcare to request the removal, because copyfraud in general is so common.

But Kenaston’s work is not the only United Healthcare or Luigi Mangione-themed artwork on the internet that has been hit with bogus DMCA takedowns in recent days. Several platforms publish the DMCA takedown requests they get on the Lumen Database, which is a repository of DMCA takedowns. 

Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet
A screenshot from Lumen Database of a takedown request

On December 7, someone named Samantha Montoya filed a DMCA takedown with Google that targeted eight websites selling “Deny, Defend, Depose” merch that uses elements of the United Healthcare logo. Montoya’s DMCA is very sparse, according to the copy posted on Lumen: “The logo consists of a half ellipse with two arches matches the contour of the ellipse. Each ellipse is the beginning of the words Deny, Defend, Depose which are stacked to the right. Our logo comes in multiple colors.” 

Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet

Medium, one of the targeted websites, has deleted the page that the merch was hosted on. It is not clear from the DMCA whether the person filing this is associated with United Healthcare, or whether they are associated with deny-defend-depose.com and are filing against copycats. Deny-defend-depose.com did not respond to a request for comment. Similarly, a DMCA takedown filed by someone named Manh Nguyen targets a handful of “Deny, Defend, Depose” and Luigi Mangione-themed t-shirts on a website called Printiment.com.

Based on the information on Lumen Database, there is unfortunately no way to figure out who Samantha Montoya or Manh Nguyen are associated with or working on behalf of.

Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet
One of the shirts targeted by Manh Nguyen's DMCA

Not Just Fan Art 

Over the weekend, a lawyer demanded that independent journalist Marisa Kabas take down an image of Luigi Mangione and his family that she posted to Bluesky, which was originally posted on the campaign website of Maryland state delegate Nino Mangione. 

The lawyer, Desiree Moore, said she was “acting on behalf of our client, the Doe Family,” and claimed that “the use of this photograph is not authorized by the copyright owner and is not otherwise permitted by law.” 

Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet
The email Kabas got

Moore said that Nino Mangione’s website “does not in fact display the photograph,” even though the Wayback Machine shows that it obviously did display the image. In a follow-up email to Kabas, Moore said “the owner of the photograph has not authorized anyone to publish, disseminate, or otherwise use the photograph for any purpose, and the photograph has been removed from various digital platforms as a result,” which suggests that other websites have also been threatened with takedown requests. Moore also said that her “client seeks to remain anonymous” and that “the photograph is hardly newsworthy.” The New York Post also published the image, and blurred versions of the image remain on its website. The New York Post did not respond to a request for comment. Kabas deleted her Bluesky post “to avoid any further threats,” she said. 

“It feels like a harbinger of things to come, coming directly after journalists for something as small as a social media post,” Kabas, who runs the excellent independent site The Handbasket, told 404 Media in a video chat. “They might be coming after small, independent publishers because they know we don’t have the money for a large legal defense, and they’re gonna make an example out of us, and they’re going to say that if you try anything funny, we’re going to try to bankrupt you through a frivolous lawsuit.” 

The takedown request to Kabas in particular is notable for a few reasons. First, it shows that the Mangione family or someone associated with it is using the prospect of a copyright lawsuit to threaten journalists for reporting on one of the most important stories of the year, which is particularly concerning in an atmosphere where journalists are increasingly being targeted by politicians and the powerful. But it’s also notable that the threat was sent directly to Kabas for something she posted on Bluesky, rather than being sent to Bluesky itself. (Bluesky did not respond to a request for comment for this story, and we don’t know if Bluesky also received a takedown request about Kabas’s post.)

Sometimes for better, but mostly for worse, social media platforms have long served as a layer between their users and copyright holders (and their lawyers). YouTube deals with huge numbers of takedown requests filed under the Digital Millennium Copyright Act. But to avoid DMCA headaches, it has also set up automated tools such as ContentID and other algorithmic copyright checks that allow copyright holders to essentially claim ownership of—and monetization rights to—supposedly copyrighted material that users upload without invoking the DMCA. YouTube and other social media platforms have also infamously set up “copy strike” systems, where people can have their channels demonetized, downranked in the algorithm, or deleted outright if rights holders claim a post or video violates their copyright or if an automated algorithm does.

This layer between copyright holders and social media users has created all kinds of bad situations where social media platforms overzealously enforce against content that may be OK to use under fair use provisions or where someone who does not own the copyright at all abuses the system to get content they don’t like taken down, which is what happened to Kenaston.

Copyright takedown processes under social media companies almost always err on the side of copyright holders, which is a problem. On the other hand, because social media companies are usually the ones receiving DMCAs or otherwise dealing with copyright, individual social media users do not usually have to deal directly with lawyers who are threatening them for something they tweeted, uploaded to YouTube, or posted on Bluesky. 

There is a long history of powerful people and companies abusing copyright law to get reporting or posts they don’t like taken off the internet. Very often, these attempts backfire as the rightsholder ends up Streisand Effecting themselves. But in recent weeks, independent journalists have been getting these DMCA takedown requests—which are explicit legal threats—directly. A “reputation management company” tried to bribe Molly White, who runs Web3IsGoingGreat and Citation Needed, to delete a tweet and a post about the arrest of Roman Ziemian, the cofounder of FutureNet, for an alleged crypto fraud. When the bribe didn’t work because White is a good journalist who doesn’t take bribes, she was hit with a frivolous DMCA claim, which she wrote about here.

These sorts of threats do happen from time to time, but the fact that several notable ones have come in quick succession just before Trump takes office is striking, considering that Trump himself said earlier this week that he feels emboldened by the fact that ABC settled a libel lawsuit with him by agreeing to pay him a total of $16 million. That case—in which George Stephanopoulos said that Trump was found civilly liable for “rape” rather than for “sexual assault”—has scared the shit out of media companies. 

This is because libel cases involving public figures consider whether that person’s reputation was actually harmed, whether the news outlet acted with “actual malice” rather than mere negligence, and the severity of the harm inflicted. Considering Trump is the most public of public figures, that he still won the presidency, and that a jury did find him liable for “sexual assault,” this is a terrible kowtowing to power that sets a horrible precedent. 

Trump’s case with ABC isn’t exactly related to a DMCA takedown filed over a Bluesky post, but they’re both happening in an atmosphere in which powerful people feel empowered to target journalists. 

“There’s also the Kash Patel of it all. They’re very openly talking about coming after journalists. It’s not hypothetical,” Kabas said, referring to Trump’s pick to lead the FBI. “I think that because the new administration hasn’t started yet, we don’t know for sure what that’s going to look like,” she said. “But we’re starting to get a taste of what it might be like.”  

What’s happening to Kabas and Kenaston highlights how screwed up the internet is, and how rampant DMCA abuse is. Transparency databases like Lumen help a lot, but it’s still possible to obscure where any given takedown request is coming from, and platforms like TeePublic do not post full DMCAs. 
