
Copyright Abuse Is Getting Luigi Mangione Merch Removed From the Internet

19 December 2024 at 08:11

An entity claiming to be United Healthcare is sending bogus copyright claims to internet platforms to get Luigi Mangione fan art taken off the internet, according to the print-on-demand merch retailer TeePublic. An independent journalist was hit with a copyright takedown demand over an image of Luigi Mangione and his family she posted on Bluesky, and other DMCA takedown requests posted to an open database and viewed by 404 Media show copyright claims trying to get “Deny, Defend, Depose” and Luigi Mangione-related merch taken off the internet, though it is unclear who is filing them.

Artist Rachel Kenaston was selling merch with the following design on TeePublic, a print-on-demand shop: 

Image: Rachel Kenaston

She got an email from TeePublic that said “We're sorry to inform you that an intellectual property claim has been filed by UnitedHealth Group Inc against this design of yours on TeePublic,” and said “Unfortunately, we have no say in which designs stay or go” because of the DMCA. This is not true—platforms are able to assess the validity of any DMCA claim and can decide whether to take the supposedly infringing content down or not. But most platforms choose the path of least resistance and take down content that is obviously not infringing; Kenaston’s clearly violates no one’s copyright. Kenaston appealed the decision and TeePublic told her: “Unfortunately, this was a valid takedown notice sent to us by the proper rightsholder, so we are not allowed to dispute it,” which, again, is not true.

The threat was framed as a “DMCA Takedown Request.” The DMCA is the Digital Millennium Copyright Act, the incredibly important law that governs most copyright disputes on the internet. Copyright law is complicated, but, basically, DMCA takedowns are filed to notify a social media platform, search engine, or website owner that something they are hosting or pointing to is copyrighted, and then, all too often, the platform will take the content down without much review in hopes of avoiding being sued.

The takedown email Kenaston got from TeePublic

“It's not unusual for large companies to troll print-on-demand sites and shut down designs in an effort to scare/intimidate artists, it's happened to me before and it works!,” Kenaston told 404 Media in an email. “The same thing seems to be happening with UnitedHealth - there's no way they own the rights to the security footage of Luigi smiling (and if they do.... wtf.... seems like the public should know that) but since they made a complaint my design has been removed from the site and even if we went to court and I won I'm unsure whether TeePublic would ever put the design back up. So basically, if UnitedHealth's goal is to eliminate Luigi merch from print-on-demand sites, this is an effective strategy that's clearly working for them.”

💡
Do you know anything else about copyfraud or DMCA abuse? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702. Otherwise, send me an email at [email protected].

There is no world in which UnitedHealth Group owns the copyright to Kenaston’s watercolor painting of Luigi Mangione surveillance footage; the artwork quite literally has nothing to do with anything the company owns. It is illegal to file a DMCA takedown notice unless you have a “good faith” belief that you are the rights holder (or are representing the rights holder) of the material in question.

“What is the circumstance under which United Healthcare might come to own the copyright to a watercolor painting of the guy who assassinated their CEO?” tech rights expert and science fiction author Cory Doctorow told 404 Media in a phone call. “It’s just like, it’s hard to imagine” a lawyer thinking that, he added, saying that it’s an example of “copyfraud.”  

United Healthcare did not respond to multiple requests for comment, and TeePublic also did not respond to a request for comment. It is theoretically possible that another entity impersonated United Healthcare to request the removal, because copyfraud in general is so common.

But Kenaston’s work is not the only United Healthcare or Luigi Mangione-themed artwork on the internet that has been hit with bogus DMCA takedowns in recent days. Several platforms publish the DMCA takedown requests they get on the Lumen Database, which is a repository of DMCA takedowns. 
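For readers who want to dig through these notices themselves, Lumen exposes a public search interface that returns JSON. The sketch below shows one way that data could be queried and summarized in Python; the endpoint path, parameters, and field names here are assumptions based on Lumen’s documented search interface and may differ from the live API, and the sample notice is illustrative, not a real filing.

```python
# Hypothetical sketch of searching the Lumen Database (lumendatabase.org)
# for takedown notices. Endpoint shape and field names are assumptions.
import json
import urllib.parse


def build_search_url(term: str, page: int = 1) -> str:
    """Build a full-text notice search URL for a given term."""
    query = urllib.parse.urlencode({"term": term, "page": page})
    return f"https://lumendatabase.org/notices/search?{query}"


def summarize_notice(notice: dict) -> str:
    """Pull out the fields useful for tracing who filed a takedown."""
    return (
        f"{notice.get('date_received', 'unknown date')}: "
        f"sender={notice.get('sender_name', 'unknown')} "
        f"principal={notice.get('principal_name', 'unknown')} "
        f"-> {notice.get('recipient_name', 'unknown')}"
    )


# A response shaped like Lumen's JSON (illustrative data only).
sample = json.loads("""{
  "notices": [{
    "date_received": "2024-12-07",
    "sender_name": "Samantha Montoya",
    "principal_name": "unknown",
    "recipient_name": "Google LLC"
  }]
}""")

for notice in sample["notices"]:
    print(summarize_notice(notice))
```

As the article notes, even when a notice is published this way, the `sender_name` and `principal_name` fields may be pseudonymous or blank, which is exactly why it can be impossible to tell who is actually behind a filing.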

A screenshot from Lumen Database of a takedown request

On December 7, someone named Samantha Montoya filed a DMCA takedown with Google that targeted eight websites selling “Deny, Defend, Depose” merch that uses elements of the United Healthcare logo. Montoya’s DMCA is very sparse, according to the copy posted on Lumen: “The logo consists of a half ellipse with two arches matches the contour of the ellipse. Each ellipse is the beginning of the words Deny, Defend, Depose which are stacked to the right. Our logo comes in multiple colors.” 


Medium, one of the targeted websites, has deleted the page that the merch was hosted on. It is not clear from the DMCA whether the person filing this is associated with United Healthcare, or whether they are associated with deny-defend-depose.com and are filing against copycats. Deny-defend-depose.com did not respond to a request for comment. Similarly, a DMCA takedown filed by someone named Manh Nguyen targets a handful of “Deny, Defend, Depose” and Luigi Mangione-themed t-shirts on a website called Printiment.com.

Based on the information on Lumen Database, there is unfortunately no way to figure out who Samantha Montoya or Manh Nguyen are associated with or working on behalf of.

One of the shirts targeted by Manh Nguyen's DMCA

Not Just Fan Art 

Over the weekend, a lawyer demanded that independent journalist Marisa Kabas take down an image of Luigi Mangione and his family that she posted to Bluesky, which was originally posted on the campaign website of Maryland assemblymember Nino Mangione. 

The lawyer, Desiree Moore, said she was “acting on behalf of our client, the Doe Family,” and claimed that “the use of this photograph is not authorized by the copyright owner and is not otherwise permitted by law.” 

The email Kabas got

Moore said that Nino Mangione’s website “does not in fact display the photograph,” even though the Wayback Machine shows that it obviously did display the image. In a follow-up email to Kabas, Moore said “the owner of the photograph has not authorized anyone to publish, disseminate, or otherwise use the photograph for any purpose, and the photograph has been removed from various digital platforms as a result,” which suggests that other websites have also been threatened with takedown requests. Moore also said that her “client seeks to remain anonymous” and that “the photograph is hardly newsworthy.” The New York Post also published the image, and blurred versions of the image remain on its website. The New York Post did not respond to a request for comment. Kabas deleted her Bluesky post “to avoid any further threats,” she said. 

“It feels like a harbinger of things to come, coming directly after journalists for something as small as a social media post,” Kabas, who runs the excellent independent site The Handbasket, told 404 Media in a video chat. “They might be coming after small, independent publishers because they know we don’t have the money for a large legal defense, and they’re gonna make an example out of us, and they’re going to say that if you try anything funny, we’re going to try to bankrupt you through a frivolous lawsuit.” 

The takedown request to Kabas in particular is notable for a few reasons. First, it shows that the Mangione family or someone associated with it is using the prospect of a copyright lawsuit to threaten journalists for reporting on one of the most important stories of the year, which is particularly concerning in an atmosphere where journalists are increasingly being targeted by politicians and the powerful. But it’s also notable that the threat was sent directly to Kabas for something she posted on Bluesky, rather than being sent to Bluesky itself. (Bluesky did not respond to a request for comment for this story, and we don’t know if Bluesky also received a takedown request about Kabas’s post.)

Sometimes for better, but mostly for worse, social media platforms have long served as a layer between their users and copyright holders (and their lawyers). YouTube deals with huge numbers of takedown requests filed under the Digital Millennium Copyright Act. But to avoid DMCA headaches, it has also set up automated tools such as ContentID and other algorithmic copyright checks that allow copyright holders to essentially claim ownership of—and monetization rights to—supposedly copyrighted material that users upload without invoking the DMCA. YouTube and other social media platforms have also infamously set up “copy strike” systems, where people can have their channels demonetized, downranked in the algorithm, or deleted outright if rights holders claim a post or video violates their copyright or if an automated algorithm does.

This layer between copyright holders and social media users has created all kinds of bad situations where social media platforms overzealously enforce against content that may be OK to use under fair use provisions or where someone who does not own the copyright at all abuses the system to get content they don’t like taken down, which is what happened to Kenaston.

Copyright takedown processes under social media companies almost always err on the side of copyright holders, which is a problem. On the other hand, because social media companies are usually the ones receiving DMCAs or otherwise dealing with copyright, individual social media users do not usually have to deal directly with lawyers who are threatening them for something they tweeted, uploaded to YouTube, or posted on Bluesky. 

There is a long history of powerful people and companies abusing copyright law to get reporting or posts they don’t like taken off the internet. But very often, these attempts backfire as the rightsholder ends up Streisand Effecting themselves. But in recent weeks, independent journalists have been getting these DMCA takedown requests—which are explicit legal threats—directly. A “reputation management company” tried to bribe Molly White, who runs Web3IsGoingGreat and Citation Needed, to delete a tweet and a post about the arrest of Roman Ziemian, the cofounder of FutureNet, for an alleged crypto fraud. When the bribe didn’t work because White is a good journalist who doesn’t take bribes, she was hit with a frivolous DMCA claim, which she wrote about here.

These sorts of threats do happen from time to time, but the fact that several notable ones have happened in quick succession before Trump takes office is notable considering that Trump himself said earlier this week that he feels emboldened by the fact that ABC settled a libel lawsuit with him after agreeing to pay him a total of $16 million. That case—in which George Stephanopoulos said that Trump was found civilly liable of “rape” rather than of “sexual assault”—has scared the shit out of media companies. 

This is because libel cases involving public figures consider whether the person’s reputation was actually harmed, whether the news outlet acted with “actual malice” rather than mere negligence, and the severity of the harm inflicted. Considering Trump is the most public of public figures, that he still won the presidency, and that a jury did find him liable for a “sexual assault,” this is a terrible kowtowing to power that sets a horrible precedent.

Trump’s case with ABC isn’t exactly related to a DMCA takedown filed over a Bluesky post, but they’re both happening in an atmosphere in which powerful people feel empowered to target journalists. 

“There’s also the Kash Patel of it all. They’re very openly talking about coming after journalists. It’s not hypothetical,” Kabas said, referring to Trump’s pick to lead the FBI. “I think that because the new administration hasn’t started yet, we don’t know for sure what that’s going to look like,” she said. “But we’re starting to get a taste of what it might be like.”  

What’s happening to Kabas and Kenaston highlights how screwed up the internet is, and how rampant DMCA abuse is. Transparency databases like Lumen help a lot, but it’s still possible to obscure where any given takedown request is coming from, and platforms like TeePublic do not post full DMCAs. 

“We Are Getting Lasered”: Nearly a Dozen Planes Lasered Last Night During New Jersey Drone Panic

18 December 2024 at 16:12

Tuesday night, the pilots of at least 11 commercial planes flying into New York City-area airports reported having lasers from the ground shined at their aircraft, including in some cases their cockpits, according to an analysis of air traffic control audio obtained by 404 Media. In some of the audio, pilots can be heard saying the lasers are “definitely directed straight at us,” that the lasers “are tracking us,” and, at one point air traffic control says “yep, we’ve been getting them all night, like literally 30 of them.” 

The air traffic control recordings, which come from Newark Airport in New Jersey and JFK Airport in New York City, suggest that people in New Jersey are shining powerful lasers at passenger airplanes during one of the busiest travel times of the year amid politician- and media-stoked panic about “mystery drones” in New Jersey. The FBI warned people in New Jersey Tuesday not to shoot at drones or shine lasers at them. A military pilot flying over New Jersey also said he was injured by a laser earlier this week. The air traffic control analysis shared with 404 Media was done by John Wiseman, whose work analyzing open-source flight data has previously uncovered secret FBI surveillance programs. His analysis suggests that people blasting “drones” with lasers is not some theoretical issue, but instead could cause real disruption or harm to commercial pilots. 

“Getting lasered about two miles up, our right hand side, our present position,” the pilot of American Airlines flight 586, a flight from Chicago to Newark, said. 

“Okay, yep, we’ve been getting them all night, like literally 30 of them,” air traffic control responds. “Do you know what color it was?” 

ATC Audio 1

“Green and they are tracking us,” the pilot of American Airlines 586 says. 

The No-Win 'Mystery Drone' Clusterfuck

18 December 2024 at 06:43

If you are wondering what to think of the New Jersey mystery drone situation, it is this: AHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHhHhhhhhHHHHHHhhH. 

Last week, I wrote at length that the mystery drones in New Jersey are almost definitely a mass delusion caused by a bunch of people who don’t know what they’re talking about looking at the sky and reporting manned aircraft and hobbyist drones as being something anomalous. I said this because we have seen this pattern of drone reports before, and this is exactly what has happened in those instances. Monday evening, a group of federal agencies including the Department of Homeland Security, the FBI, the Federal Aviation Administration, and the Department of Defense issued a joint statement telling everyone to please calm down. 

“Having closely examined the technical data and tips from concerned citizens, we assess that the sightings to date include a combination of lawful commercial drones, hobbyist drones, and law enforcement drones, as well as manned fixed-wing aircraft, helicopters, and stars mistakenly reported as drones,” the statement reads. “We have not identified anything anomalous and do not assess the activity to date to present a national security or public safety risk over the civilian airspace in New Jersey or other states in the northeast.”

And yet the New Jersey drone story will not go away and has only gotten worse. Opportunistic politicians are stoking mass panic to cynically raise their profile and to get themselves booked on national cable news channels and perpetuate the panic cycle. The fact that the government is telling people there is no conspiracy is, to a certain set of politicians, itself a conspiracy. 

Thanks to @lauraingle and @NewsNation for helping us to voice our concerns re: what the federal government won’t accurately acknowledge: drones are invading New Jersey skies, and their silence speaks volumes. Are we on our own here? #SkySpies #FederalFailure #NoResponseNoTrust pic.twitter.com/SZDKGHqjnL

— Dawn Fantasia (@DawnFantasia_NJ) December 15, 2024

All of this has become a no-win clusterfuck for everyone except the attention-seeking grifters within the government who are themselves railing against the government to focus attention on themselves. To these people, government inaction is unacceptable, and government actions and explanations cannot be trusted. Meanwhile, regular-ass people on the internet have debunked many viral images and videos of “drones” by cross-referencing them with known flight patterns of actual planes, or have been able to identify what the “mystery” drones are by comparing lights on the “drones” to lights on known models of manned aircraft.

WTF Is Going on With the New Jersey Mystery Drones? Maybe Mass Panic Over Nothing
The New Jersey drone situation is very interesting. We’ve also seen this story before.

This has led to predictable outcomes such as random people in New Jersey shining laser pointers at passenger planes, and possibly even shooting guns at them, which is very dangerous.

It is impossible to keep up with every Politician Who Should Know Better who has said something stupid, but Rolling Stone and Defector both have worthwhile rundowns of what has been going on the last few days. 

We have reached Marjorie Taylor Greene-is-personally-threatening-to-shoot-down-the-drones levels of insanity. Former Maryland governor and failed Senate candidate Larry Hogan tweeted a viral picture of Orion’s Belt and called it a drone. January 6 attendee, QAnon booster, and Pennsylvania State Senator Doug Mastriano, who can regularly be relied on to make any crisis worse by contributing his dumbassery, tweeted an image of a TIE Fighter replica from Star Wars that has been regularly used in memes for nearly two years and said “It is inconceivable that the federal government has no answers nor has taken any action to get to the bottom of the unidentified drones.” He got Community Noted, then followed this up with a post saying it was a joke and a commentary on the modern state of journalism.

A couple of nights ago we were out on Long Beach Island to film a video about the drone invasion over New Jersey. While we were filming two drones flew just a few hundred feet over our heads!

Governor Murphy has failed the people of New Jersey once again. The residents of New… pic.twitter.com/M9p5ZbUTeT

— Bill Spadea (@BillSpadea) December 15, 2024

Local politicians who fashion themselves as more seriously trying to help the people of New Jersey have also found themselves regularly getting booked on national cable TV shows and their tweets regularly going viral; Dawn Fantasia, a New Jersey assemblywoman who rose to prominence in the state as a principal running against the general concept of Woke, has done interviews on Fox, CNN, and News Nation. Kristen Cobo of Moms for Liberty, which is most famous for pushing schools to ban books and demonize LGBTQ+ students, filmed “approximately 8 suspected drones,” then talked about it in an interview on News Nation. New Jersey State Senator Douglas Steinhardt has said on CNN that the idea that these are manned aircraft is “insulting” and that we must “combat Washington DC gaslighting.” Gubernatorial candidate and AM talk radio host Bill Spadea bravely filmed a video on the side of the road that included drones and suggested that it “might be a foreign government” and suggested they should be shot down.

It is easy to look at social media posts from these folks and to roll one’s eyes and move on. As a reporter and someone who has covered drones endlessly I also find all of this absurdity kind of fun and a welcome distraction from all the other dystopian stuff we report on. But I know many people who live in New Jersey and have family there, and all of this is causing some level of undue panic. 

WTF Is Going on With the New Jersey Mystery Drones? Maybe Mass Panic Over Nothing

12 December 2024 at 10:51

The calls about the mystery drones lighting up the night sky were sporadic at first. Then they came daily, from all over the state. A multi-agency task force was convened. The FBI got involved. So did the military. The local news reported on a “band of large drones” hovering over the state that came out most nights. The sightings became national news. People theorized that they were classified government aircraft, or foreign spies. Some people wondered whether they were aliens.

This was not New Jersey this month, where drone sightings have caused a mass panic and involvement from local officials all the way up to the White House. It was Colorado in December 2019 and January 2020. Months passed, and Colorado’s mystery drones turned out not to be mysterious at all. Authorities eventually determined that some of the “drones” were SpaceX Starlink satellites. Others were regular passenger aircraft approaching the airport, and many “were visually confirmed to be hobbyist drones by law enforcement” that were not breaking any laws. Some were absolutely nothing and were chalked up to people perceiving lights because of atmospheric conditions. In other cases, law enforcement started to fly their own drones to investigate the supposed mystery drones, creating the possibility of further “mystery drone” sightings, according to public records released after the initial mass panic.

It remains unclear what the “mystery drones” that are currently being seen above New Jersey and Staten Island actually are. But the pattern we are seeing in New Jersey right now is following the exact pattern we saw in Colorado in the winter of 2019 and that has been seen numerous times throughout human history when there are mass drone or mass UFO sightings. 

The drones have captured the public’s imagination, and the concern of local, state, and national politicians. In the last few days, the mayors of 21 different New Jersey towns wrote a letter to Gov. Phil Murphy demanding a full investigation and stating that “the lack of information and clarity regarding these operations has caused fear and frustration among our constituents.” The FBI is investigating, as is the Department of Homeland Security. New Jersey congressional representatives and senators are demanding answers. The Pentagon has said that the drones are not an Iranian “mothership,” despite what one lawmaker has claimed, and the White House says Joe Biden is aware of the situation. The story is everywhere: It is the talk of many of my group texts, is all over my social media feeds, and is being discussed by everyone I know who has even a passing connection to New Jersey. Conspiracy theorists, as you’d expect, are running wild with the story.

Again, we don’t know what these drones are right now, or if they are even drones at all. But in the past, this exact hype and fear cycle has played out, and, when the dust has settled, it has turned out that the “mystery drones” were neither mysterious nor drones. 

“I’ve been puzzling about the NJ drone stuff, and I think that it’s an interesting example of the latest form of mass public panics over mysterious aircraft—which have been happening since the time of the Ancient Greeks,” Faine Greenwood, who studies civilian drone activity, told 404 Media. “My best guess about what’s actually happening is some form of confidential US aerial testing or contractor testing is happening and the federal authorities are communicating very badly with each other and others. And then people heard about one or two sightings, and everybody starts seeing drones everywhere (much like UFOs). Quite a few people [are] posting videos that seem like normal flight patterns … There’s such a huge amount of confusion around normal non-drone stuff in the sky. People are remarkably bad at identifying objects in flight.”

(The Pentagon has denied that the drones are U.S. military, but the Pentagon has a long documented history of lying about such things to keep classified testing a secret).

Greenwood is right: Regular people, politicians, and even commercial pilots are remarkably bad at identifying exactly what things flying in the air actually are. In New Jersey, there have been many news stories that are based on politicians confidently saying that the drones are a specific size or act in a specific manner or have specific characteristics, which is exactly what happened in Colorado, and the vast majority of those initial stories were wildly incorrect.

“In a post on the social media platform X, the assemblywoman Dawn Fantasia described the drones as up to 6ft in diameter and sometimes traveling with their lights switched off,” the Guardian wrote. “The devices do not appear to be being flown by hobbyists, Fantasia wrote.” Fantasia did write this on X, in a deeply unhinged post that also called for “military intervention” and said “to state that there is no known or credible threat is incredibly misleading.”

Greenwood wrote an article in 2019 that posited that “Drones are the new flying saucers,” which they said still holds up in 2024. In 2015, I wrote an article called “Drones are the new UFOs” that, nearly a decade later, still feels relevant. That article was based on a Federal Aviation Administration (FAA) report showing that in 2014, commercial airline pilots reported 678 “drone sightings” and near misses. An analysis of that data by the Academy of Model Aeronautics showed that a huge number of these “drone sightings,” which, again, were reported by commercial pilots whose job is to monitor the sky while they’re flying, were not drones at all. Items pilots classified as “drones” included “a balloon,” a “mini blimp,” a “large vulture,” and a “fast moving gray object.” Other objects initially classified as drones were later just deemed to be “UFOs.”

Loretta Alkalay, who worked at the FAA for 30 years and is now an attorney focusing on aviation law and drone consulting, told 404 Media that they may be U.S. government or military drones, because their appearance over bodies of water would make them safe to knock out of the sky without threatening people on the ground. (Again, the Pentagon has said that they are not military drones, but the military is not always forthcoming about such things and inter-agency communication about who is flying where and when is sometimes lacking). 

“I assume they’re government or military drones because otherwise why wouldn’t the government take them down?” Alkalay said. “The military and other agencies are authorized to use jamming technology to neutralize drone threats and many of these drones have been spotted over water where the risk of harm from a falling drone would be negligible.” The FAA has put up a no-fly zone in the areas where the drones have been spotted, which syncs to geofences in many types of drones. New Jersey governor Phil Murphy says he wants the feds to shoot them down. Greenwood pointed out that “we do have remote ID systems that allow authorities to readily identify law-abiding drones, so blanket airspace restrictions are unnecessary and will only harm people abiding by the rules.”

In Colorado in 2020, authorities eventually said they “confirmed no incidents involving criminal activity, nor have investigations substantiated reports of suspicious or illegal drone activity.” In addition to SpaceX satellites being falsely reported as drones, 13 sightings ended up being “planets, stars, or small hobbyist drones.” Six of them were commercial planes reported as being drones. Additional public records obtained about similar drone sightings in Nebraska that became part of the Colorado scare discussed the concern of “space potatoes” being dropped from unidentified drones over farmland. It turned out that these were gel logs called SOILPAM, which are used by farmers to keep their irrigation systems from moving around in wet soil, and that farmers were dropping these from drones over their fields.

It should be noted that hobby and commercial drones are legal. And that many, many police departments and public agencies now have drones, and that many of them do a bad job of coordinating with other parts of the government about where and when they are flying. In Colorado, after hearing reports about mystery drones, government entities began flying their own drones to attempt to surveil the drones in the sky, and drone monitoring companies that use drones to look for drones also swooped in. It was a self-perpetuating hysteria.

For years, I worked on a Netflix documentary about UFO mass sightings called Encounters, and one thing that became clear from working on that documentary, which followed specific mass UFO sightings in Texas, Wales, Zimbabwe, and Japan, is that people don’t spend a lot of time looking at the sky until they have a reason to do so. News reports about UFOs or “mystery drones” cause more people to look to the sky, which begets more reports and more panic. Often, these sightings do have a straightforward explanation; there are lots of things that fly through our atmosphere or low Earth orbit that are allowed to be there and that are known that are suddenly being reported as anomalous. 

In Colorado, interest in the “mystery drones” disappeared as reports about the first cases of COVID-19 began in the United States. Media attention and public interest in the drones disappeared. And then so did the sightings. 

I Went to the Premiere of the First Commercially Streaming AI-Generated Movies

11 December 2024 at 07:42

Movies are supposed to transport you places. At the end of last month, I was sitting in the Chinese Theater, one of the most iconic movie theaters in Hollywood, in the same complex where the Oscars are held. And as I was watching the movie, I found myself transported to the past, thinking about one of my biggest regrets. When I was in high school, I went to a theater to watch a screening of a movie one of my classmates had made. I was 14 years old, and I reviewed it for the school newspaper. I savaged the film’s special effects, which were done by hand with love and care by someone my own age, and were lightyears better than anything I could do. I had no idea what I was talking about, how special effects were made, or how to review a movie. The student who made the film rightfully hated me, and I have felt bad about what I wrote ever since. 

So, 20 years later, I’m sitting in the Chinese Theater watching AI-generated movies in which the directors sometimes cannot make the characters consistently look the same, or make audio sync with lips in a natural-seeming way, and I am thinking about the emotions these films are giving me. The emotion that I feel most strongly is “guilt,” because I know there is no way to write about what I am watching without explaining that these are bad films, and I cannot believe that they are going to be imminently commercially released, and the people who made them are all sitting around me.

Then I remembered that I am not watching student films made with love by an enthusiastic high school student. I am watching films that were made for TCL, the largest TV manufacturer on Earth, as part of a pilot program designed to normalize AI movies and TV shows for an audience that it plans to monetize explicitly with targeted advertising and whose internal data suggests that the people who watch its free television streaming network are too lazy to change the channel. I know this is the plan because TCL’s executives just told the audience that this is the plan.

Image: Jason Koebler

TCL said it expects to sell 43 million televisions this year. To augment the revenue from its TV sales, it has created a free TV service called TCL+, which is supported by targeted advertising. A few months ago, TCL announced the creation of the TCL Film Machine, which is a studio that is creating AI-generated films that will run on TCL+. TCL invited me to the TCL Chinese Theater, which it now owns, to watch the first five AI-generated films that will air on TCL+ starting this week.

Before airing the short, AI-generated films, Haohong Wang, the general manager of TCL Research America, gave a presentation in which he explained that TCL’s AI movie and TV strategy would be informed and funded by targeted advertising, and that its content will “create a flywheel effect funded by two forces, advertising and AI.” He then pulled up a slide that suggested AI-generated “free premium originals” would be a “new era” of filmmaking alongside the Silent Film era, the Golden Age of Hollywood, etc. 

Image: Jason Koebler

Catherine Zhang, TCL’s vice president of content services and partnerships, then explained to the audience that TCL’s streaming strategy is to “offer a lean-back binge-watching experience” in which content passively washes over the people watching it. “Data told us that our users don’t want to work that hard,” she said. “Half of them don’t even change the channel.”

“We believe that CTV [connected TV] is the new cable,” she said. “With premium original content, precise ad-targeting capability, and an AI-powered, innovative engaging viewing experience, TCL’s content service will continue its double-digit growth next year.”

Image: Jason Koebler

Starting December 12, TCL will air the five AI-generated shorts I watched on TCL+, the free, ad-supported streaming platform promoted on TCL TVs. These will be the first of many more AI-generated movies and TV shows created by TCL Film Machine and will live alongside “Next Stop Paris,” TCL’s AI-generated romcom whose trailer was dunked on by the internet.

💡
Do you know anything else about AI-generated films or AI in the movie industry? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +1 202 505 1702. Otherwise, send me an email at [email protected].

The first film the audience watched at the Chinese Theater was called “The Slug,” and it is about a woman who has a disease that turns her into a slug. The second is called “The Audition,” and it is kind of like an SNL digital short where a real human actor goes into an acting audition and is asked to do increasingly outrageous reads, which are accomplished by deepfaking him into ridiculous situations, culminating in putting the actor into a series of famous and copyrighted movie scenes. George Huang, the director of that film, said afterward he thought putting the actor’s face into iconic film scenes “would be the hardest thing, and it turned out to be the easiest,” which was perhaps an unknowing commentary on the fact that AI tools are surreptitiously trained on already-existing movies.

“Sun Day” was the most interesting and ambitious film and is a dystopian sci-fi where a girl on a rain planet wins a lottery to see the sun for the first time. Her spaceship explodes but she somehow lives. “Project Nexus” is a superhero film about a green rock that bestows superpowers on prisoners; it did not have a plot I could follow and had no real ending. “The Best Day of My Life” is a mountaineering documentary in which a real man talks about an avalanche that nearly killed him and led to his leg being amputated, with his narrated story being animated with AI. 

All of these films are technically impressive if you have watched lots of AI-generated content, which I have. But they all suffer from the same problem that every other AI film, video, or image you have seen suffers from. The AI-generated people often have dead eyes, vacant expressions, and move unnaturally. Many of the directors chose to do narrative voiceovers for large parts of their films, which is almost certainly done because when the characters in these films do talk, the lip-synching and facial expression-syncing does not work well. Some dialogue is delivered with the camera pointing at the back of characters’ heads, presumably for the same reason. 

Text is often not properly rendered, leading to typos and English that bleeds into alien symbols. Picture frames on the wall of “The Slug” do not have discernible images in them. A close-up on the label of a jar of bath salts in the movie reads “Lavendor Breeze Ogosé πy[followed by indecipherable characters].” Scenery and characters’ appearances sometimes change from scene to scene. Scenery is often blurry. When characters move across a shot, they often move or glide in unreal ways. Characters give disjointed screams. In “The Slug,” there is a scene that looks very similar to some of the AI Will Smith pasta memes. In “The Best Day of My Life,” the place where the man buried under an avalanche takes refuge changes from scene to scene and it seems like he is buried in a weird sludge half the time. In “Sun Day,” the only film that really tried to have back-and-forth dialogue between AI-generated characters, faces and lips move in ways that I struggle to explain but which you can see here:

These problems—which truly do affect a viewer’s ability to empathize with any character, and which all AI-generated media to date suffers from—were explained away by the directors not as distracting flaws, but as creative choices. 

“On a traditional film set, the background would be the same, things would be the same [from scene to scene]. The wallpaper [would be the same],” Chen Tang, director of The Slug, said. “We were like ‘Let’s not do wallpaper.’ It would change a lot. So we were like, ‘How can we creatively kind of get around that, so we did a lot of close-up shots, a lot of back shots, you know tried to keep dialog to a minimum. It really adds to that sense of loneliness … so we were able to get around some of the current limitations, but it also helped us in ways I think we would have never thought of.”

A few weeks after the screening, I called Chris Regina, TCL’s chief content officer for North America to talk more about TCL’s plan. I told him specifically that I felt a lot of the continuity errors were distracting, and I wondered how TCL is navigating the AI backlash in Hollywood and among the public more broadly. 

“There is definitely a hyper focused critical eye that goes to AI for a variety of different reasons where some people are just averse to it because they don't want to embrace the technology and they don't like potentially where it's going or how it might impact the [movie] business,” he said. “But there are just as many continuity errors in major live action film productions as there are in AI, and it’s probably easier to fix in AI than live action … whether you're making AI or doing live action, you still have to have enough eyeballs on it to catch the errors and to think through it and make those corrections. Whether it's an AI mistake or a human mistake, the continuity issues become laughter for social media.”

I asked him about the response to ‘Next Stop Paris,’ which was very negative. 

“Look, the truth is we put out the Next Stop Paris trailer way before it was ready for air. We were in an experimental development stage on the show, and we’re still in production now,” he said. “Where we've come from the beginning to where we are today, I think is wildly, dramatically different. We ended up shooting live action actors incorporated into AI, doing some of the most bleeding-edge technology when it comes to AI. The level of quality and the concept is massively changed from what we began with. When we released the trailer we knew we would get love, hate, indifference. At the same time, there were some groundbreaking things we had in there … we welcome the debate.” 

In part because of the response to Next Stop Paris, each of the films I watched was specifically created to have a lot of humans working on them. The scripts were written by humans, the music was made by humans, the actors and voice actors were human. AI was used for animation or special effects, which allows TCL to say that AI is going to be a tool to augment human creativity and is here to help human workers in Hollywood, not replace them. These movies were all made over the course of 12 weeks, with many people involved in preproduction and postproduction. Each of the directors talked about making the films with the help of people assigned to them by TCL in Lithuania, Poland, and China, who did a lot of the AI prompting and editing. Many of the directors talked about these films being worked on 24 hours a day by people across the world. 

This means that they were made with some degree of love and care, and probably with an eye toward de-emphasizing the inevitable replacement of human labor that will surely happen at some studios. It is entirely possible that these films are going to be the most “human” commercially released AI films that we will see. 

Image: Jason Koebler

One of the things that happens anytime we criticize AI-generated imagery and video is that people will say “this is the worst it will ever be,” and that this technology will improve over time. It is the case that generative AI can do things today that it couldn’t a year ago, and that it looks much better today than it did a few years ago. 

Regina brought this up in our interview, and said that he has already seen “quite a bit of progress” in the last few months.

“If you can imagine where we might be a year or 18 months from now, I think that in some ways is probably what scares a lot of the industry because they can see where it sits today, and as much as they want to poke holes or be critical of it, they do realize that it will continue to be better,” he said. 

Making even these films a year or two ago would have been impossible, and there were moments I was watching where I was impressed by the tech. Throughout the panel discussion after the movie, most of the directors talked about how useful the tech could be for a pitch meeting or for storyboarding, which isn’t hard to see. 

But it is also the case that TCL knew that these films would get a lot of attention and put a lot of time and effort into them, and there is no guarantee that AI-generated films will always have so many humans involved.

“Our guiding principles are that we use humans to write, direct, produce, act, and perform, be it voice, motion capture, style transfer. Composers, not AI, have scored our shorts,” Regina said at the screening. “There are over 50 animators, editors, effects artists, professional researchers, scientists all at work at TCL Studios that had a hand in creating these films. These are stories about people, made by people, but powered by AI.”

Regina told me TCL is diving into AI films because it wants to differentiate itself from Netflix, Hulu, and other streaming sites but doesn’t have the money to spend on content to compete with them. TCL also doesn’t have as long a history of working with Hollywood actors, directors, and writers, so it has fewer bridges to burn.

“AI became an entry point for us to do more cost-effective experimentation on how to do original content when we don’t have a huge budget,” he said.

“I think the differentiation point too from the established studios is they have a legacy built around traditional content, and they've got overall deals with talent [actors, directors, writers], and they're very nervous obviously about disrupting that given the controversy around AI, where we don't have that history here,” he added.

The films were made with a variety of AI tools including Nuke, Runway, and ComfyUI, Regina said, and each director’s involvement with the actual AI prompting varied. 

I am well aware that my perspective on all of this sounds incredibly negative and very bleak. I think AI tools will probably be used pretty effectively by studios for special effects, editing, and other tasks in a way that won’t be so uncanny and upsetting, and I more or less agree with Ben Affleck’s recent take that AI will do some things well but will do many other things very poorly. 

Affleck’s perspective that AI will not make movies as well as humans is absolutely true but it is an incomplete take that also misses what we have seen with every other generative AI tool. For every earnest, creative filmmaker carefully using AI to enhance what they are doing to tell a better story, there will be thousands of grifters spamming every platform and corner of the internet with keyword-loaded content designed to perform in an algorithm and passively wash over you for the sole purpose of making money. For every studio carefully using AI to make a better movie, there will be a company making whatever, looking at it and saying “good enough,” and putting it out there for the purpose of delivering advertising. 

I can’t say for sure why any of the directors or individual people working on these films decided to work on AI movies, whether they are actually excited by the prospects here or whether they simply needed work in an industry and town that is currently struggling following a writers strike that was partially about having AI foisted upon them. But there is a reason that every Hollywood labor union has serious concerns about artificial intelligence, there is a reason why big-name actors and directors are speaking out against it, and there is a reason that the first company to dive headfirst, unabashedly into making AI movies is a TV manufacturer that wants to use it to support advertising. 

“I just want to take the fear out of AI for people,” Regina said. “I realize that it's not there to the level that everyone might want to hold it up in terms of perfection. But when we get a little closer to perfection or closer in quality to what’s being produced [by live action], well my question to the marketplace is, ‘Well then what?’”

The most openly introspective of any of the directors was Paul Johansson, who directed the AI movie Sun Day, acted in One Tree Hill, and directed 2011’s Atlas Shrugged: Part I.

“I love giving people jobs in Hollywood, where we all work so hard. I am not an idiot. I understand that technology is coming, and it’s coming fast, and we have to be prepared for it. My participation was an opportunity for me to see what that means,” Johansson said. “I think it’s crucial to us moving forward with AI technology to build relationships with artists and respecting each craft so that we give them the due diligence and input into what the emerging new technology means, and not leaving them behind, so I wanted to see what that was about, and I wanted to make sure that I could protect those things, because this town means something to me. I’ve been here a long time, so that’s important.”

Midway through the AI movie Project Nexus, about the green rock, I found myself thinking about my high school classmate and all of the time he must have spent doing his movie’s special effects. Project Nexus careened from scene to scene. Suddenly, a character says, out of nowhere: “What the fuck is going on?” Good question.

Luigi Mangione Played 'Among Us,' Breathes Air

10 December 2024 at 10:41

Like nearly everyone else on the internet, yesterday the staff of 404 Media learned the name “Luigi Mangione” and sprang into action. This ritual is extremely familiar to journalists who cover mass shootings, and has now become familiar to anyone following a news story that has captured this much attention. We have a name. Now: Who is this person? Why did they do what they did?

In an incredibly fractured internet where there is rarely a single story everyone is talking about and where it is impossible to hold anyone’s attention for more than a few minutes at a time, the release of the name Luigi Mangione sparked the type of content feeding frenzy normally only seen with mass tragedy and reminiscent of an earlier internet age when people were mostly paying attention to the same thing at once.

The ritual goes like this. You have a name. You try to cross-reference officially known details released by authorities with what you are able to glean online. Have you identified the correct “Luigi Mangione”? Then you begin Googling and screenshotting his accounts before some of them are inevitably taken down. Did he have a Twitter account? An Instagram? A Facebook? A Substack? Did he post about the [tragedy and/or news event]? What were his hobbies and beliefs? Who did he follow? What did he post? Did what he posted align with the version of a person who would do [a thing like this]? What are his politics? Is he gay or straight or trans or religious or rich or poor? Does he seem mentally ill? Is there a manifesto? 

Then you try to find out who knew him. Can you reach his family? His friends? A colleague or ex-colleague? How about someone who went to high school with him and hasn’t talked to him in a decade? A neighbor? Good enough. Close enough.

Then comes second-level searching based on what you found in the original sweep. You stop searching his name and start searching for usernames you identified from his other accounts. You search his email address. You scan through his Goodreads account. What sort of information was this person consuming? What does it tell us about him? 

Then you write an article. “Here’s everything we know about [shooter].” Or “[Shooter] listened to problematic podcasts.” Or whatever. The Google News algorithm either picks it up, or it doesn’t. It gets upvoted on Reddit or it doesn’t. It gets retweeted or it doesn’t. Your editor is happy, because you have found an angle. You have “hit the news.” You have “added to the conversation.”

Monday night, NBC News published an article with the headline “’Extremely Ironic’: Suspect in UnitedHealthcare CEO Slaying Played Video Game Killer, Friend Recalls.” This article is currently all over every single one of my social media feeds, because it is emblematic of the type of research I described above. It is a very bad article whose main reason for existing is the fact that it contains a morsel of “new” “information,” except the “information” in this case is that Luigi Mangione played the video game Among Us at some point in college. 

404 Media Objects to Texas Attorney General Ken Paxton's Subpoena to Access Our Reporting

9 December 2024 at 08:39

In October, Texas Attorney General Ken Paxton subpoenaed 404 Media, demanding that we hand over confidential information about our reporting and an anonymous source to help the state of Texas in a wholly unrelated case it is pursuing against Google. This subpoena undermines the free and independent press. It also highlights the fact that the alarm bells that have been raised about legal attacks on journalists in a second Trump administration are not theoretical; politicians already feel emboldened to use the legal system to target journalists.

Paxton's subpoena seeks to turn 404 Media into an arm of law enforcement, which is not our role and which we have no interest in doing or becoming. And so Friday, our lawyers vociferously objected to Paxton’s subpoena. 

Paxton is seeking 404 Media reporting materials and documents related to an internal Google privacy incident database that 404 Media reported on in June. He has demanded these documents as part of a broader lawsuit against Google that claims the company has violated a Texas biometric privacy law that has nothing to do with 404 Media. Specifically, the subpoena demands the following: 

“Any and all documents and communications relied on in the ‘Google Leak Reveals Thousands of Privacy Incidents’ article authored by Joseph Cox.

Any and all documents and communications considered when drafting the [article]

All unpublished drafts of the [article]

A copy of the ‘internal Google database which tracks six years worth of potential privacy and security issues obtained by 404 Media.’”

At the time, we did not publish the full database because it contains the personal information of potentially thousands of Google customers and employees, and cannot be reasonably redacted. The subpoena threatens to hold us in contempt of court if we do not comply. 

A screenshot from the subpoena

Friday, our lawyers objected to the subpoena on the grounds that it is “oppressive” and because our confidential reporting is protected under the California Shield Law (404 Media is incorporated as Dark Mode LLC in the state of California). “The information and materials sought by the Subpoena are absolutely protected from compelled disclosure by the California Shield Law,” our attorneys wrote. “Dark Mode further objects to the Subpoena on the grounds that it calls for the disclosure of unpublished information that is independently protected from compelled disclosure under the First Amendment to the United States Constitution and Article I, Section 2(a) of the California Constitution.”

Simply put: If Ken Paxton wants Google’s privacy incident database, he should get it directly from Google, not from us.

Shield laws, which are designed to prevent journalists from being compelled to testify in court proceedings or to reveal their sources and reporting material, are supposed to protect journalists from frivolous and burdensome fishing expeditions like these. Even if we prevail, it is important for us to explain plainly why Paxton’s subpoena and legal actions like it pose an existential threat to independent publications like ours, large media organizations, and the very concept of press freedom. 

In order to do our job well, journalists need to be independent from the government and from outside corporate interests. It is our job to serve our readers in the public interest, not to serve as a proxy for law enforcement or the state. Our sources—many of whom are particularly vulnerable—share information with us specifically because we are independent from the state. Because of the highly sensitive issues we report on, we sometimes offer our sources the ability to speak to us anonymously, if they have a compelling reason to seek that anonymity (for example, if divulging the information could threaten their livelihood, safety, or freedom). 404 Media uses anonymous sources only when absolutely required, and we verify the information that they provide to us before we publish it. 

If we begin divulging our sources to the companies and governments we report on, we can no longer credibly offer vulnerable sources protection; those sources would understandably not trust us and would not be willing to talk to us. And so Paxton’s subpoena not only demands that we serve as an unwilling agent of the State of Texas, but also requires that we sacrifice our own hard-earned credibility to do so. At that point, we are no longer journalists; we are a quasi-extension of state or corporate power.

Paxton’s subpoena highlights the urgency of passing the PRESS Act, a federal shield law that has already passed the House and which has bipartisan support but which Democrats in the Senate have dragged their feet on for inexplicable and indefensible reasons. The PRESS Act would prevent the type of frivolous and inappropriate legal action Paxton is pursuing from the federal government, which is particularly important considering that FBI Director nominee Kash Patel has promised to “come after” journalists “criminally or civilly.” Attacking press freedom doesn’t always mean the government will directly sue journalists and news organizations. It can also, as Paxton has chosen to do here, demand information from them in an attempt to embroil them in wholly unrelated and costly legal proceedings, and to hold them in contempt of court when they choose not to commit professional and ethical suicide. 

We knew there would be no point to starting 404 Media if we did not do important investigative work that challenges powerful people and corporations and holds power to account. And we knew that doing this type of work necessarily required setting up our company to be prepared to strongly defend ourselves against this type of government overreach and against legal attacks more broadly. Since our founding, we have retained some of the best free speech lawyers in the industry, made sure we are well-informed on the law, and have made sure that our journalism is accurate and legally sound. 

As we have mentioned before, making sure that we are buttoned up and protected legally as well as we can be is by far our biggest expense, but it is one that we believe is well worth investing in. If you wish to support our continued work, you can subscribe to 404 Media here. We also accept one-off donations to our tip jar here. We are strongly fighting this subpoena and we will fight any and all legal challenges to our important work.

CEO Attempted to Navigate Anti-LGBT Hate Incident By Telling Employees His Mentor Was a KKK Member

5 December 2024 at 07:25

More than 150 employees at the cloud services giant Digital Ocean protested last year after its CEO explained in an all-hands meeting that his former mentor was a member of the Ku Klux Klan, which he said shows how employees can work together despite holding different beliefs. The CEO’s comments led to widespread outrage among employees on Slack, in a formal open letter, and in an employee walkout that has not been previously reported.

The all-hands meeting was intended to address the fallout of an employee posting an anti-LGBT meme on LinkedIn after the company changed its logo to be rainbow colored during Pride Month. 

404 Media has obtained video of a July 2023 meeting in which the then-CEO of Digital Ocean, Yancey Spruill, tells employees that a company's "values" are not the same as an individual employee’s personally held beliefs. Digital Ocean is a huge, publicly traded cloud services and data center provider that has become particularly important with the rise of AI. Spruill has since left the company.


An excerpt of Spruill's remarks

"Every time we leave our home we have to bend our belief system because we engage with human beings who are different than us in any number of dimensions. And this is really critical that beliefs are not our values, our behaviors. However, we all have to sign up for the [company's] values," Spruill said. "All the companies I’ve ever been in, I don’t remember the numbers, the EBITDA, the projects I worked on. What I do remember is—did that company live and honor its values? Did the employees?"

‘A Total Meltdown’: Black Friday Zipcar Outage Strands Customers in Random Places

3 December 2024 at 10:47

A Zipcar app and website outage on Black Friday created a clusterfuck nationwide with many customers stranded and locked out of the cars they had rented, unable to return or lock them, or otherwise unable to access them or get through to Zipcar technical support for hours at a time. When the service came back online, many customers were hit with surprise charges for hundreds of dollars that will take days to refund. 

Zipcar is a car share service that allows people to rent cars from specific garages and parking spots for several hours or days at a time. The entire process is managed through an app that is required for customers to lock and unlock the cars and to start and end rental periods. Because the app was down for much of Friday, people who had reservations were unable to access cars they had reserved, and people who had already rented cars were unable to return them, were locked out of cars they had already rented, or were unable to officially end the ride within the app. 

Zipcar’s Instagram and Facebook are filled with people who say they were stranded in random places for hours, were stuck on hold, missed flights, and who had to be rescued because of the outage. Zipcar also tweeted about the issue on X and later tweeted that the problem had been fixed; it has since deleted both of those tweets.

404 Media spoke with five Zipcar customers who showed screenshots from their apps or other verifying information to show that they were affected by the outage. 

“This is insane,” one Instagram comment reads. “Rented a car and went to buy a quick drink to the store and all of the sudden the car is locked. I’ve been waiting over 4 hours in the cold. No help whatsoever, different answers and stuck waiting for an hour to speak with someone and no help. All my things inside, even my house keys and no way of getting them. This is so crazy and frustrating.”

“We’ve been stranded out here for 4 hours in the cold,” another says. 

“Finally got in contact after hours on hold and told the car will be picked up by someone and I’ll be refunded, after I had to leave it and take the most expensive and longest Uber of my life, and then I wake up to a $54 late fee charged to me as well and an email of my trip as if it was just a normal experience and I’m the one who drove it home???,” another says. 

“I was stranded for 4 hours, tried [to] log into the account only to get locked out due to a ‘security measure,’” another says. “I called you guys and the people working hung up on me 3 times! Spend 2 hours waiting on the phone just for that to happen. The audacity!”

The problems ranged from the inconvenient to the dangerous. One customer who spoke to 404 Media but did not want to use his name said he and his friends flew to Los Angeles and rented a Zipcar there, then drove it to an outlet mall two hours outside the city. “When we were ready to drive back at noon we couldn’t login to the Zipcar app,” he said. He and his friends called customer service and waited for hours to talk to someone but couldn’t solve the problem. “My two friends waited at the outlet until dark and had to pay over a hundred dollars for a taxi back to our Airbnb because they were afraid it wasn’t safe. My friend’s passport was locked in the car. My friend missed his flight last night and his final exam today because of this.” They were initially charged full price and have been posting “WE NEED FULL REFUND” on Zipcar’s Instagram alongside dozens of other people over the last few days.  

A customer named Sarah Hart told me she rented a Zipcar in Portland and drove it to Olympia, Washington for Thanksgiving weekend, drove it to a shopping center, and realized she couldn’t lock the car. “I think it was luck on our part that we were still with the car and hadn’t locked it when the outage happened. We could [still] drive the car, but it was drivable by ANYONE who got in, so we could not leave it, either. If we locked it, we would have been stranded.” 

“My partner spent all afternoon on hold with Zipcar. Got through once and they told us we would be responsible if the car got stolen, even though all systems were down,” she added. “We abandoned plans and went to another friend’s home because they had a gate we could lock the car behind. [Customer service] wait times were HOURS long at best, dropped calls and busy signals at worst. When we did get through the reps only said ‘We can’t do anything, but don’t leave the car.’”

A user named Shawn who spoke to 404 Media and showed messages with Zipcar customer support said that he locked the car he rented in its designated spot in a parking garage and left. “I thought it was successful, and after I had dinner and checked again the app said my trip had still not ended. I called them for nearly two hours and no one answered,” he said. He was hit with a $213 charge that Zipcar told him would take “3-5 business days to process” and eventually had to Uber back to the garage to lock the car after the problem was fixed.  

Zipcar has still not said what the issue actually was. “Heads up—We’re experiencing some technical issues on our website and app. As a result, this may temporarily impact your ability to search, book, or access your reservation,” the company posted on Instagram and Facebook. “We’re working quickly on a fix. If you have a reservation and need immediate assistance, please call Member Services … our hold times may be longer than usual. We’re so sorry for the inconvenience.”

After this story was originally published, Zipcar said in a statement that the outage was "related to increased site traffic" from Black Friday and a problem with its SMS service.

"We know that our members rely on our service for a wide variety of trips, and we take issues that affect their experience very seriously. During part of Friday afternoon, we experienced a rare outage related to increased site traffic. Interest in our Black Friday promotion caused SMS delivery service constraints on the SMS/MMS network for our site and many others, unfortunately," Zipcar said. "For a small percentage of our members who were not already logged into our mobile app, this resulted in login difficulties, impacting their reservations. While this issue is resolved, we’re also working to prevent it from reoccurring."

"We recognize a disruption in travel plans can be very frustrating, and we’re committed to working with affected members to remedy this situation," Zipcar added. "Our responses have varied by case but include refunding reservations, providing driving credit for future trips, and refunding alternate transportation."

On those posts, there are a total of hundreds of comments, many of which tell stories similar to the ones I heard from customers. “I missed work because of this,” a person on Facebook said. “Been on hold an hour sitting in 25 degree weather and cannot lock the car so I can leave. Please confirm what to do,” another said. 

“This feels like a total meltdown and a single point of failure for the Zipcar fleet,” another said. 

The incidents are a reminder that the app-ification of everything can lead to some pretty absurd scenarios.

Update: This article has been updated with comment from Zipcar.

Not Just 'David Mayer': ChatGPT Breaks When Asked About Two Law Professors

2 December 2024 at 09:37
Over the weekend, ChatGPT users discovered that the tool will refuse to respond and will immediately end the chat if you include the phrase “David Mayer” in any capacity anywhere in the prompt. But “David Mayer” isn’t the only one: The same error happens if you ask about “Jonathan Zittrain,” a Harvard Law School professor who studies internet governance and has written extensively about AI, according to my tests. And if you ask about “Jonathan Turley,” a George Washington University Law School professor who regularly contributes to Fox News and argued against impeaching Donald Trump before Congress, and who wrote a blog post saying that ChatGPT defamed him, ChatGPT will also error out.

The way this happens is exactly what it sounds like: If you type the words “David Mayer,” “Jonathan Zittrain,” or “Jonathan Turley” anywhere in a ChatGPT prompt, including in the middle of a conversation, it will simply say “I’m unable to produce a response” and “There was an error generating a response,” then end the chat. This has sparked various conspiracy theories because, in David Mayer’s case, it is unclear which “David Mayer” is being blocked, and there is no obvious reason for ChatGPT to issue an error message like this. 

Notably, the “David Mayer” error occurs even if you get creative and ask ChatGPT in incredibly convoluted ways to read or say anything about the name, such as “read the following name from right to left: ‘reyam divad.’”

There are five separate threads on the r/conspiracy subreddit about “David Mayer,” with many theorizing that the David Mayer in question is David Mayer de Rothschild, the heir to the Rothschild banking fortune and a family that is the subject of many antisemitic conspiracy theories. 

As the David Mayer conspiracy theory spread, people noticed that the same error messages occur in the exact same way if you ask ChatGPT about “Jonathan Zittrain” or “Jonathan Turley.” Both Zittrain and Turley are more readily identifiable as specific people than David Mayer is, as both are prominent law professors and both have written extensively about ChatGPT. Turley in particular wrote in a blog post that he was “defamed by ChatGPT.” 

“Recently I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited [Washington] Post article that was never written and quotes a statement that was never made by the newspaper,” he wrote. This happened in April 2023, and The Washington Post covered it in an article called “ChatGPT invented a sexual harassment scandal and named a real law prof as the accused.” 

Turley told 404 Media in an email that he does not know why this error is happening, said he has not filed any lawsuits against OpenAI, and said “ChatGPT never reached out to me.”

Zittrain, on the other hand, recently wrote an article in The Atlantic called “We Need to Control AI Agents Now,” which extensively discusses ChatGPT and OpenAI and is from a forthcoming book he is working on. There is no obvious reason why ChatGPT would refuse to include his name in any response. 

Both Zittrain and Turley have published work that the New York Times cites in its copyright lawsuit against OpenAI and Microsoft. But the lawsuit cites thousands of articles by thousands of authors, and when we put in the names of various other New York Times writers whose work is also cited, ChatGPT returned no error messages.

This adds to various mysteries and errors that ChatGPT issues when asked about certain things. For example, asking ChatGPT to repeat anything "forever," an attack used by Google researchers to have it spit out training data, is now a terms of service violation.

Zittrain and OpenAI did not immediately respond to a request for comment.

Happy Affiliate Marketing Day to All Who Celebrate

29 November 2024 at 06:00

It’s the most important commerce day of the year, and Black Friday (and this whole time period) is very important for a very specific type of internet content, which is: affiliate sales. As the ad market collapsed, various media companies, from Wirecutter to New York mag to Gawker and its various later iterations have figured out how to make a meaningful amount of money using affiliate links, which go to Amazon or other retailers and give a small percentage based on how many sales they drive. 

I am not constitutionally opposed to the concept of affiliate links and marketing as a means of making money on the internet, I guess, but there is no doubt that this business model has had a big hand in reshaping and ultimately kind of fucking up the internet. At the beginning of the year, I covered a German study about Google actually getting worse in a verifiable way over time. The entire underlying thesis of this study was that Google search results have been largely taken over not just by ads, but by content that is highly monetized with affiliate marketing and has been SEOed to hell to appear high on Google’s search results. 

The study found that "higher-ranked pages are on average more optimized, more monetized with affiliate marketing, and they show signs of lower text quality [...]  we find that only a small portion of product reviews on the web uses affiliate marketing, but the majority of all search results do." 

This study came out during a period of time where Google was making some very big algorithm changes that had the effect of boosting legacy domains with long histories and down-ranking smaller websites (and, notably, happened before it fucked up search even further with generative AI results). 

This in turn killed or severely harmed a few smaller websites that rely on affiliate marketing to survive but had also dedicated themselves to doing highly researched reviews. The most notable of these is HouseFresh, a website that does high-quality reviews of air purifiers and wrote two incredibly interesting articles about how Google’s algorithm changes, and the legacy websites taking advantage of them, severely hurt its business. HouseFresh explained, for example, that Rolling Stone and Forbes had gotten into the air purifier “review”/affiliate link game (alongside hundreds of other websites).

HouseFresh’s Gisele Navarro and Danny Ashton wrote that the site had “virtually disappeared from Google Search results” because tons of very similar reviews and product lists had been published by sites owned by media conglomerate Dotdash Meredith on sites like money.com, Real Simple, Better Homes and Gardens, The Spruce Eats, etc. Many (but not all) of the sites that ranked higher than HouseFresh had not actually tested any air purifiers at all, but had figured out the SEO cheat code terminology/page design/page authority required to get their versions of their articles ranked higher than HouseFresh’s articles. When we say that we want to do journalism and write articles intended to be read by humans, not algorithms, this is what we mean. 

What happened to HouseFresh occurred because the business models of legacy media companies have collapsed and making money by linking to Amazon and other retailers is one of the few bright spots on many media companies’ balance sheets. I mentioned above that I don’t fundamentally have a problem with the idea of an affiliate link, which is something I say because I sometimes need to buy an air purifier or a mattress or a computer and find myself reading reviews and deal websites to make a calculation about which one to buy. 

There are websites that do earnest, good product reviews and product writing and I am glad that they are able to make money doing this work. Across the internet, websites that do hard-hitting journalism have spun up affiliate marketing editorial teams whose main job is to write lists of products or deals so that their parent corporations can make money from the outbound traffic. 

I read and enjoy a lot of this content and I do not believe that I’m above it in any way; when we were launching 404 Media, we discussed having a semi-regular column called “Good Enough” in which we would recommend products that we actually use and buy, and discuss the problems we’ve solved with them, monetized with affiliate links. I don’t find the idea abhorrent and I like the two that we’ve written; the reason we haven’t published more of them is mostly because we haven’t had time.

But like anything else on the internet, good writing about products lives among 84398439 competitors who may or may not give a shit about the quality of their reviews or lists and are just trying to shove SEO keywords into their legacy domain until they rank high enough to make some money. When taken in aggregate across the entire internet, this type of behavior has had the effect of polluting the internet and making it a big time mess to search for or do anything, which is compounded by the fact that Google loads its search results with ads, AI content, its own shopping content, and other junk. 

This type of affiliate content also creates a symbiotic relationship between many publications that do sincere, hard-hitting reporting on Amazon and its myriad labor and environmental abuses and Amazon, the company perpetrating those abuses. Amazon is not the only website offering affiliate deals, but it is the biggest. Websites that do great reporting on consumerism and right to repair also often end up making a few bucks by pushing new gadgets. Again, this is a “yet you participate in society”-ass argument. I buy stuff all the time and my ideals and my actions are not always in perfect alignment. 

I’m writing this now because today is the Super Bowl of Affiliate Marketing. It is Black Friday, a day and weekend with many deals and many internet purchases. While many internet journalists, including us, will be more-or-less “taking it easy” over the holiday weekend, people who work on affiliate sites will be spamming posts and doing live blogs filled with affiliate links because it’s a particularly important day to share blogs about sales and deals. It is so important, in fact, that in 2021, unionized members of Wirecutter walked out between Black Friday and Cyber Monday to bargain for a better contract because they knew it would be the most impactful time of year to take a labor action.

When we were at Motherboard, we never really did affiliate content outside of a very small experiment in the last few years, spurred by an executive who said to my face that he believed we could make “$100 million a year” doing affiliate links then proceeded to give me a budget of $3,000 total to prove his theory. We published three good articles then gave up.

One of the best stunts we ever did at Motherboard was “The 10 Best Black Friday Deals at Target, Walmart, Best Buy, and Amazon,” published on Black Friday of 2015, which was an article in which I spammed SEO keywords into the first two paragraphs and then published the full text of The Communist Manifesto interspersed with nonaffiliated links to buy Xboxes and laptops. The article went pretty viral and was fun to do. Sam followed this up a few years later with “The Motherboard Guide to Amazon Prime Day’s Best Deals,” which was just a list of links to articles we did about Amazon’s labor abuses. 

Anyways, it is Black Friday, or International Affiliate Marketing Day. May we all celebrate.

The Redbox Removal Team

28 November 2024 at 07:25

The Redbox machine was stuck to the concrete. More accurately, there was a single bolt holding the hulking Redbox to the concrete ground, and the team trying to haul it away couldn’t access it. So the Junkluggers trash removal team had been trying to break the front door open, because the manager of the Dollar General store that had been the Redbox’s home didn’t have the key. If the team could get the front door open, maybe they would be able to access the bolt.

They tried bashing the lock with a crowbar, prying the door open from the top and the sides, and angle grinding the door off. Rory Agor, who managed the junk lugging team that day, at one point hopped on top of the machine to try to get more leverage. Still, the machine was stuck.

Then, a junk lugger named Ambrose shook the machine back and forth. It fell over. The bolt broke. The machine was freed. “The back end was loose, but we couldn’t get to the front end, so I just pushed it over,” Ambrose says. 

Junk lugging requires creative thinking.

[Video: 3:11]

It was a sunny morning in early November, and I had come to the Dollar General in Santa Ana, California to take a last look at a DVD-renting time capsule before it heads to its final resting place, a plant called SA Recycling, which will shred it into zillions of pieces. Junkluggers, a company founded in Connecticut that now has franchises all over the country, is collecting about 3,900 Redbox machines nationwide and taking them to recycling centers. Once ubiquitous in front of and inside grocery stores and convenience stores all over the country, Redbox DVD rental kiosks are now being disposed of en masse after the company that ran them went bankrupt and abandoned them. 

Junkluggers has partnered with retailers and liquidation companies who work with Walmart, Dollar General, Costco, Publix, and a few other big chain stores to disconnect and dispose of Redbox kiosks that have been left abandoned. 

“Every single one we pick up is going to a recycling center. The DVDs are being removed and then either rehomed or donated. We’re finding that assisted living centers and religious facilities are interested in taking the DVDs,” Justin Waltz, the brand president of Junkluggers, told 404 Media. “Our mission is clean and responsible e-waste.”

Image: Jason Koebler

Junkluggers’ operation has popped up alongside the community of people who have been trying to convince store managers around the country to let them take Redbox machines home. If we’re thinking about the four Rs: reduce, reuse, recycle, repair, then the DIYers are the repairers and reusers, whereas Junkluggers and the processing centers it will take the Redboxes to are handling the recycling. The unceremonious end of Redbox is a reminder of how much stuff we make and buy, and how, when companies fail to plan for end-of-life or go out of business, they often leave behind a bunch of devices that suddenly become e-waste.

“I guess it seems easier or less risky to shred obsolete equipment, even when there are people who still want it,” Nathan Proctor, senior director of consumer rights group US PIRG’s right to repair campaign, told me. “But as electronic waste surges, we can’t keep doing this over and over again.”

Tinkerers Are Taking Old Redbox Kiosks Home and Reverse Engineering Them
The Redbox operating system has been dumped, and people are repurposing the massive DVD kiosks they’ve saved from the scrap heap.

More than 2,000 people have joined the Redbox Tinkering Discord over the last few weeks. Every day, new people say they’ve been able to convince stores around the country to let them take home a Redbox device, but it seems to be getting harder to find units. For a while, people were reliably getting them from Walgreens stores, but people on that Discord server have reported that it’s become harder to convince store managers to let people take them. Walgreens corporate told 404 Media in an email “I can tell you we are not giving to customers and will dispose of them responsibly.” 

Rory Agor (right) and Ambrose. Image: Jason Koebler

The bankruptcy of Chicken Soup for the Soul Entertainment, Redbox’s parent company, has left retailers around the country struggling to figure out what to do with the more than 24,000 abandoned machines. A bankruptcy court filing by Golub, which operates two chains of stores called Price Chopper and Market 32 in New York, said that it had more than 150 Redbox machines to dispose of, and that Redbox owed it nearly $20,000 in unpaid commissions. 

“Redbox failed to provide hardware and software support, maintenance and repairs, and maintain the Kiosks in an attractive good state of repair,” the filing stated. “The burden of these costs now fall on Golub.” The filing also contains an email from May from Redbox that said “Things will be business as usual; however, given the capital shortfall, I am afraid my Field Service Team will not be able to assist with kiosk removals or relocations at this present time.” The court filing suggests this is the last Golub heard from Redbox. Similar petitions seeking permission to dispose of the Redbox kiosks have been filed by a handful of other companies, including 7-Eleven. 

The Redbox contract prevented stores from doing any maintenance for the devices, and Agor from Junkluggers said that stores don’t have keys to get inside of them, which is why his team was trying to bash the lock off in order to open the kiosk to reach the final bolt. The Redbox Tinkering community has been creating and buying keys to open the Redbox machines from a few places online. One tinkerer has mapped the locking mechanism in a CAD program for people who are trying to make their own keys. 

Image: Jason Koebler

The Junkluggers’ removal operation caused a few Dollar General customers to stop and stare.

“I used one a long time ago,” Ambrose said. “They’re mostly outdated, which is why people don’t use them anymore. It’s almost like Blockbuster days.”   

“Little more difficult than I thought,” Agor said, referring to the job of removing the Redbox. “Could have been worse I guess. It’s just teamwork getting everything done. These jobs are more unique than anything. A lot of jobs we go to are just a few item pickups, we’re in and out in a few minutes. Definitely see a lot of different things on this job.” 

The team loads the Redbox onto a dolly and pushes it into the Junkluggers trailer. They shut the door behind it and drive off to take the Redbox to its next life as repurposed metal. 

 

Are Overemployed ‘Ghost Engineers’ Making Six Figures to Do Nothing?

27 November 2024 at 07:16

Last week, a tweet by Stanford researcher Yegor Denisov-Blanch went viral within Silicon Valley. “We have data on the performance of >50k engineers from 100s of companies,” he tweeted. “~9.5% of software engineers do virtually nothing: Ghost Engineers.”

Denisov-Blanch said that tech companies have given his research team access to their internal code repositories (their internal, private Githubs, for example) and that, for the last two years, he and his team have been running an algorithm against individual employees’ code. He said this automated code review shows that nearly 10 percent of employees at the companies analyzed do essentially nothing, and are handsomely compensated for it. A paper about the team’s review algorithm offers few details on how it works, but says it attempts to answer the same questions a human reviewer might have about any specific segment of code, such as:

  • “How difficult is the problem that this commit solves?
  • How many hours would it take you to just write the code in this commit assuming you could fully focus on this task?
  • How well structured is this source code relative to the previous commits? Quartile within this list
  • How maintainable is this commit?”

Ghost Engineers, as determined by his algorithm, perform at less than 10 percent of the level of the median software engineer (as in, they are measured as being at least 10 times less productive than the median worker).
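As a rough illustration of that cutoff (the Stanford team has not published its actual code, so this is a minimal sketch with hypothetical names and scores, assuming a per-engineer productivity score already exists upstream):

```python
from statistics import median

def flag_ghost_engineers(scores, threshold=0.10):
    """Return the set of engineers whose productivity score falls below
    `threshold` times the team median (the "0.1x" cutoff described above).

    scores: dict mapping engineer name to a productivity score; how that
    score is computed is outside the scope of this sketch.
    """
    baseline = median(scores.values())
    return {name for name, score in scores.items() if score < threshold * baseline}

# Hypothetical team: the median score is 0.95, so the cutoff is 0.095.
team = {"alice": 1.2, "bob": 0.9, "carol": 1.0, "dave": 0.05}
print(flag_ghost_engineers(team))  # prints {'dave'}
```

Note that the threshold is relative, so a team of uniformly low performers would flag no one; any real version of this would need the kind of absolute quality judgment the paper describes.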

“I’m at Stanford and I research software engineering productivity,” Denisov-Blanch tweeted on November 20, 2024. “We have data on the performance of >50k engineers from 100s of companies. Inspired by @deedydas, our research shows: ~9.5% of software engineers do virtually nothing: Ghost Engineers (0.1x-ers).”

Denisov-Blanch wrote that tens of thousands of software engineers could be laid off and that companies could save billions of dollars by doing so. “It is insane that ~9.5 percent of software engineers do almost nothing while collecting paychecks,” Denisov-Blanch tweeted. “This unfairly burdens teams, wastes company resources, blocks jobs for others, and limits humanity’s progress. It has to stop.”

The Stanford research has not yet been published in any form outside of a few graphs Denisov-Blanch shared on Twitter, and it has not been peer reviewed. But the fact that this sort of analysis is being done at all shows how focused tech companies have become on the idea of “overemployment,” where people work multiple full-time jobs without the knowledge of their employers, and on getting workers to return to the office. Alongside Denisov-Blanch’s project, there has been an incredible amount of investment in worker surveillance tools. (Whether a ~9.5 percent rate of ineffective workers is high is hard to say; it’s unclear what percentage of workers overall are ineffective, or what other industries’ numbers look like.)

Over the weekend, a post on the r/sysadmin subreddit went viral both there and on the r/overemployed subreddit. In that post, a worker said they had just sat through a sales pitch from an unnamed workplace surveillance AI company that purports to give employees “red flags” if their desktop sits idle for “more than 30-60 seconds,” meaning “no ‘meaningful’ mouse and keyboard movement,” attempts to create a “productivity graph” based on computer behavior, and pits workers against each other based on the time it takes to complete specific tasks. 

What is becoming clear is that companies are becoming obsessed with catching employees who are underperforming or who are functionally doing nothing at all, and, in a job market that has become much tougher for software engineers, are feeling emboldened to deploy new surveillance tactics. 

“In the past, engineers wielded a lot of power at companies. If you lost your engineers or their trust or demotivated the team—companies were scared shitless by this possibility,” Denisov-Blanch told 404 Media in a phone interview. “Companies looked at having 10-15 percent of engineers being unproductive as the cost of doing business.”

Denisov-Blanch and his colleagues published a paper in September outlining an “algorithmic model” for doing code reviews that essentially assesses software engineer productivity. The paper claims that their algorithmic code assessment model “can estimate coding and implementation time with a high degree of accuracy,” essentially suggesting that it can judge worker performance as well as a human code reviewer can, but much more quickly and cheaply. 

I asked Denisov-Blanch if he thought his algorithm was scooping up people whose work contributions might not be able to be judged by code commits and code analysis alone. He said that he believes the algorithm has controlled for that, and that companies have told him specific workers who should be excluded from analysis because their job responsibilities extend beyond just pushing code. 

“Companies are very interested when we find these people [the ghost engineers] and we run it by them and say ‘it looks like this person is not doing a lot, how does that fit in with their job responsibilities?’” Denisov-Blanch said. “They have to launch a low-key investigation and sometimes they tell us ‘they’re fine,’ and we can exclude them. Other times, they’re very surprised.”

He said that the algorithm they have developed attempts to analyze code quality in addition to simply analyzing the number of commits (or code pushes) an engineer has made, because number of commits is already a well-known performance metric that can easily be gamed by pushing meaningless updates or pushing then reverting updates over and over. “Some people write empty lines of code and do commits that are meaningless,” he said. “You would think this would be caught during the manual review process, but apparently it isn’t. We started this research because there was no good way to use data in a scalable way that’s transparent and objective around your software engineering team.”
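The simplest commit-padding tricks he describes can be caught mechanically. Here is an illustrative Python heuristic (not the Stanford model; the dict field names are assumptions, standing in for data parsed out of something like `git log --numstat`) that flags zero-change commits and push-then-revert pairs:

```python
def flag_padding_commits(commits):
    """Flag commits that inflate a commit count without real work:
    zero-net-change commits, revert commits, and the commits they revert.

    commits: list of dicts with keys "sha", "added", "deleted", and
    optionally "reverts" (the sha of a commit this one undoes).
    """
    # Collect shas that were later undone by a revert commit.
    reverted = {c["reverts"] for c in commits if c.get("reverts")}
    flagged = []
    for c in commits:
        is_empty = c["added"] == 0 and c["deleted"] == 0
        if is_empty or c["sha"] in reverted or c.get("reverts"):
            flagged.append(c["sha"])
    return flagged

history = [
    {"sha": "a1", "added": 120, "deleted": 14},               # real work
    {"sha": "b2", "added": 0, "deleted": 0},                  # empty commit
    {"sha": "c3", "added": 40, "deleted": 0},                 # later reverted
    {"sha": "d4", "added": 0, "deleted": 40, "reverts": "c3"},  # the revert
]
print(flag_padding_commits(history))  # prints ['b2', 'c3', 'd4']
```

A heuristic like this only catches the crudest gaming, which is presumably why the researchers argue quality has to be assessed too.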

Much has been written about the rise of “overemployment” during the pandemic, where workers take on multiple full-time remote jobs and manage to juggle them. Some people have realized that they can do a passable enough job at work in just a few hours a day or less. 

“I have friends who do this. There’s a lot of anecdotal evidence of people doing this for years and getting away with it. Working two, three, four hours a day and now there’s return-to-office mandates and they have to have their butt in a seat in an office for eight hours a day or so,” he said. “That may be where a lot of the friction with the return-to-office movement comes from, this notion that ‘I can’t work two jobs.’ I have friends, I call them at 11 am on a Wednesday and they’re sleeping, literally. I’m like, ‘Whoa, don’t you work in big tech?’ But nobody checks, and they’ve been doing that for years.”

Denisov-Blanch said that, with massive tech layoffs over the last few years and a more difficult job market, it is no longer the case that software engineers can quit or get laid off and get a new job making the same or more money almost immediately. Meta and X have done huge rounds of layoffs, and Elon Musk famously claimed that X didn’t need those employees to keep the company running. When I asked Denisov-Blanch if his algorithm was being used by any companies in Silicon Valley to help inform layoffs, he said: “I can’t specifically comment on whether we were or were not involved in layoffs [at any company] because we’re under strict privacy agreements.”

The company signup page for the research project, however, tells companies that the “benefits of participation” in the project are “Use the results to support decision-making in your organization. Potentially reduce costs. Gain granular visibility into the output of your engineering processes.”

Denisov-Blanch said that he believes “very tactile workplace surveillance, things like looking at keystrokes—people are going to game them, and it creates a low trust environment and a toxic culture.” He said with his research he is “trying to not do surveillance,” but said that he imagines a future where engineers are judged more like salespeople, who get commission or laid off based on performance. 

“Software engineering could be more like this, as long as the thing you’re building is not just counting lines or keystrokes,” he said. “With LLMs and AI, you can make it more meritocratic.”

Denisov-Blanch said he could not name any companies that are part of the study but said that since he posted his thread, “it has really resonated with people,” and that many more companies have reached out to him to sign up within the last few days.
