A powerful AI tool can predict, with high accuracy and in seconds, where a photo was taken based on features inside the image itself, such as vegetation, architecture, and the distance between buildings. The company behind it is now marketing the tool to law enforcement officers and government agencies.
Called GeoSpy, made by a firm called Graylark Technologies out of Boston, the tool has also been used for months by members of the public, with many making videos marveling at the technology, and some asking for help with stalking specific women. The company’s founder has aggressively pushed back against such requests, and GeoSpy closed off public access to the tool after 404 Media contacted him for comment.
Based on 404 Media’s own tests and conversations with other people who have used it and with investors, GeoSpy could radically change what information can be learned from photos posted online, and by whom. Law enforcement officers with very little training, private threat intelligence companies, and stalkers could use this technology, and in some cases already are. Dedicated open source intelligence (OSINT) professionals can of course do this too, but the training and skillset necessary can take years to build up. GeoSpy allows essentially anyone to do it.
This week, Google shoved various capabilities from Gemini, its AI tool, into Workspace for Business and Enterprise customers, including associated Gmail accounts. You might now see buttons for "Summarize this email," which when clicked will provide a bullet point list of what the email (allegedly) says and, in email threads, people's sentiment towards it in their replies. There's also a button in the top right that brings up a Gemini prompt bar, along with a couple of suggested ways Gemini offers to help. "Show unread emails from today" and "show unread emails from this week" are two I'm looking at right now.
Many people are going to love this. Others are going to want to run away from it as quickly as possible. Many, including us, are already furious that they were automatically opted into it. Turns out, disabling it isn't straightforward, as I found out when I tried to opt 404 Media out of it.
“Today we announced that we’re including the best of Google AI in Workspace Business and Enterprise plans without the need to purchase an add-on,” Google wrote in a blog post on Wednesday.
The "Summarize this email" button took me by surprise. I opened my Gmail iOS app and it was just there. When I asked a Google spokesperson whether Google gave clients a heads up that this was coming, they provided me with a couple of links (including the one above), both of which were published Wednesday. So, no, not really.
I tried out the email summarize feature on a non-sensitive email Emanuel had just forwarded me. It was an obvious scam email, from someone pretending to be from the family of Bashar Al-Assad who said they could make us a lot of money. Emanuel forwarded me the email and joked "sounds good."
Gemini’s summary said “Mohammed Karzoon, a former member of the Syrian President al-Assad’s cabinet, reaches out to Emanuel Maiberg to discuss potential investment portfolios.” The second bullet point read “Emanuel Maiberg expresses interest in the proposition.” Gemini, to little surprise, did not detect that Emanuel was being heavily sarcastic, a beautifully human act.
I then tried to opt us out of these sorts of Gemini features. I logged into Google Workspace, clicked the “Generative AI” drop down menu on the left, then clicked “Gemini app.” I changed the service status to “OFF for everyone.”
Nope, that's wrong. The Google spokesperson told me that button referred to gemini.google.com, which is the Gemini app, not its integration with Workspace. I also tried another section called "Gemini for Workspace," which sounded promising, but that wasn't helpful either.
I actually had to go to Account, then Account settings, then "Smart features and personalization," where an administrator can set a default value for users. The spokesperson clarified that individual end users can turn it off themselves in their own Gmail settings, and pointed to these instructions where users disable "smart features."
Do you know anything else about how Google is using AI? I would love to hear from you. Using a non-work device, you can message me securely on Signal at +44 20 8133 5190. Otherwise, send me an email at joseph@404media.co.
But it looks like it's all or nothing. You can't turn off just the new Gemini stuff without also disabling things like Gmail nudging you about an email you received a few days ago, or the automatic filtering that sorts emails into the Primary, Social, and Promotions tabs, features Gmail has had for years and which many users are probably used to.
On iOS, you go to settings, then data privacy, then turn off "Smart features and personalization." A warning then says you're about to turn off all the other stuff I mentioned above, and much more. On Android, you go to settings, general, and then "Google Workspace smart features."
Turning these off doesn't actually get rid of the Gemini button at the top right of the inbox. It just means that when you do click it (maybe by accident, because it's right next to the button to switch to a different inbox), it'll prompt you to once again turn on smart features. It does get rid of the "Summarize this email" button, though.
My first thought when I saw the "Summarize this email" button was: oh god, people are going to be submitting all sorts of sensitive, confidential business information into Gemini. We've already seen that with ChatGPT, and organizations have had to write policies to stop employees from doing it. And now you're making that process one click, directly in the inbox? On its Privacy Hub page, Google says "Your content is not used for any other customers. Your content is not human reviewed or used for Generative AI model training outside your domain without permission." I do not know whether I have given that permission or not, though; that's part of the problem.
“You’ll see these end user settings will become even clearer and easier for people to use in the coming days as we’re rolling out updates (happening now) with language that’s specific to Gemini in Workspace features,” the spokesperson told me.
With Meta’s recent speech policy changes regarding immigration, in which the company will allow people to call immigrants pieces of trash, Mark Zuckerberg is laying the narrative groundwork for President-elect Trump’s planned mass deportations of people from the United States.
Multiple speech and content moderation experts 404 Media spoke to drew some parallels between these recent changes and when Facebook contributed to a genocide in Myanmar in 2017, in which Facebook was used to spread anti-Rohingya hate and the country’s military ultimately led a campaign of murder, torture, and rape against the Muslim minority population. Although there are some key differences, Meta’s changes in the U.S. will also likely lead to the spread of more hate speech across Meta’s sites, with the real world consequences that can bring.
"We believe Meta is certainly opening up their platform to accept harmful rhetoric and mold public opinion into accepting the Trump administration's plans to deport and separate families," said Citlaly Mora, director of communications at Just Futures Law, a legal and advocacy organization focused on issues around deportation and surveillance.
We've got much more on what is happening inside Meta with the company's recent speech policy changes. Jason runs us through it. After the break, Joseph explains how thousands of apps have been hijacked to steal your location data, possibly without the app developers' knowledge. In the subscribers-only section, we talk about various stories intersecting with the LA fires, such as Amazon delivery drivers and AI images. (YouTube version to come shortly.)
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
It's that time again! We're planning our latest FOIA Forum, a live, interactive session of an hour or more where Joseph and Jason will teach you how to pry records from government agencies through public records requests. We're planning this for Thursday, 23rd January at 1 PM Eastern. Add it to your calendar!
So, what's the FOIA Forum? We'll share our screen and show you specifically how we file FOIA requests. We take questions from the chat and incorporate those into our FOIAs in real time. We'll also check on some requests we filed last time. This time we're particularly focusing on how to use FOIA in the new Trump administration. We'll talk about local, state, and federal agencies; tricks for getting the records you want; requesting things you might not have thought of; and how to appeal when the federal government tries to withhold those records.
If this will be your first FOIA Forum, don’t worry, we will do a quick primer on how to file requests (although if you do want to watch our previous FOIA Forums, the video archive is here). We really love talking directly to our community about something we are obsessed with (getting documents from governments) and showing other people how to do it too.
Paid subscribers can already find the link to join the livestream below. We'll also send out a reminder a day or so before. Not a subscriber yet? Sign up now here in time to join.
We've got a bunch of FOIAs that we need to file and are keen to hear from you all on what you want to see more of. Most of all, we want to teach you how to make your own too. Please consider coming along!
A hacker compromised an administrative account on the website for popular game Path of Exile 2, which allowed them to reset the passwords on dozens of players’ accounts, according to comments from developer Grinding Gear Games (GGG) made during a podcast on Sunday. This access would have given the hacker the ability to steal powerful and rare items from those players, with some players spending hundreds of hours grinding for valuable in-game currency.
The news comes after a wave of Path of Exile 2 players complained on the game's forums and social media about being hacked and having their inventories emptied. The comments also show how the hacker compromised the account shortly before the game's launch, seemingly lying in wait for players to build up their stashes of items before pulling off their heist.
“We totally fucked up here,” Path of Exile 2 game director Jonathan Rogers said during a podcast recording with action roleplaying game (ARPG) content creators GhazzyTV and Darth Microtransaction.
Some of the world’s most popular apps are likely being co-opted by rogue members of the advertising industry to harvest sensitive location data on a massive scale, with that data ending up with a location data company whose subsidiary has previously sold global location data to US law enforcement.
The thousands of apps, named in hacked files from location data company Gravy Analytics, include everything from games like Candy Crush to dating apps like Tinder, to pregnancy tracking and religious prayer apps, across both Android and iOS. Because much of the collection is occurring through the advertising ecosystem, not code developed by the app creators themselves, this data collection is likely happening without the knowledge of users or even of the app developers.
“For the first time publicly, we seem to have proof that one of the largest data brokers selling to both commercial and government clients, appears to be acquiring their data from the online advertising ‘bid stream,’” rather than code embedded into the apps themselves, Zach Edwards, senior threat analyst at cybersecurity firm Silent Push, and who has followed the location data industry closely, tells 404 Media after reviewing some of the data.
The data provides a rare glimpse inside the world of real-time bidding (RTB). Historically, location data firms paid app developers to include bundles of code that collected their users' location data. Many companies have turned instead to sourcing location information through the advertising ecosystem, where companies bid to place ads inside apps. But a side effect is that data brokers can listen in on that process and harvest the location of people's mobile phones.
“This is a nightmare scenario for privacy because not only does this data breach contain data scraped from the RTB systems, but there's some company out there acting like a global honey badger, doing whatever it pleases with every piece of data that comes its way,” Edwards adds.
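To make the "listening in on the bid stream" mechanism concrete, here is a minimal, hypothetical sketch in Python. It assumes a simplified OpenRTB-style bid request; the field names (device.geo, device.ifa) follow the public OpenRTB convention, but the payload, app bundle, and harvest_location function are invented for illustration and are not drawn from the leaked Gravy Analytics files.

```python
import json
from datetime import datetime, timezone

# Hypothetical OpenRTB-style bid request, as an ad exchange might broadcast
# to every bidder in an auction. Field names follow the public OpenRTB
# convention; the values are invented for illustration.
sample_bid_request = json.dumps({
    "id": "example-request-1",
    "app": {"bundle": "com.example.weatherapp", "name": "Example Weather"},
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # resettable advertising ID
        "os": "Android",
        "geo": {"lat": 38.8977, "lon": -77.0365, "type": 1},  # GPS-derived coordinates
    },
})

def harvest_location(bid_request_json: str) -> dict | None:
    """Pull the ad ID and coordinates out of a bid request.

    Any company positioned as a bidder sees this payload whether or not it
    wins the auction, which is how a broker could build movement histories
    without the app developer ever shipping the broker's code.
    """
    request = json.loads(bid_request_json)
    device = request.get("device", {})
    geo = device.get("geo", {})
    if "lat" not in geo or "lon" not in geo:
        return None
    return {
        "ad_id": device.get("ifa"),
        "app": request.get("app", {}).get("bundle"),
        "lat": geo["lat"],
        "lon": geo["lon"],
        "seen_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    print(harvest_location(sample_bid_request))
```

Logged over weeks and joined on the advertising ID, records like this are enough to reconstruct where a single phone sleeps, works, and travels.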
We're back! And holy moly what a start to the year. We just published a bunch of stories. First, Jason talks about blowback inside Meta to its new board member, and Meta's subsequent censoring of those views. We also chat about those mad Meta AI profiles. After the break, Sam explains why Pornhub is blocked in most of the U.S. south. In the subscribers-only section, Joseph talks about why the government is planning to name one of its most important (and at risk) witnesses.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
Hackers claim to have compromised Gravy Analytics, the parent company of Venntel, which has sold masses of smartphone location data to the U.S. government. The hackers say they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones that shows people's precise movements, and they are threatening to publish the data publicly.
The news is a crystallizing moment for the location data industry. For years, companies have harvested location information from smartphones, either through ordinary apps or the advertising ecosystem, and then built products based on that data or sold it to others. In many cases, those customers include the U.S. government, with arms of the military, DHS, the IRS, and the FBI using it for various purposes. But the data those companies amass presents an attractive target to hackers.
“A location data broker like Gravy Analytics getting hacked is the nightmare scenario all privacy advocates have feared and warned about. The potential harms for individuals is haunting, and if all the bulk location data of Americans ends up being sold on underground markets, this will create countless deanonymization risks and tracking concerns for high risk individuals and organizations,” Zach Edwards, senior threat analyst at cybersecurity firm Silent Push, and who has followed the location data industry closely, told 404 Media. “This may be the first major breach of a bulk location data provider, but it won't be the last.”
Telegram, the popular social network and messaging application which has also become a hotbed for all sorts of serious criminal activity, provided U.S. authorities with data on more than 2,200 users last year, according to newly released data from Telegram.
The news shows a massive spike in the number of data requests fulfilled by Telegram after French authorities arrested Telegram CEO Pavel Durov in August, in part because of the company’s unwillingness to provide user data in a child abuse investigation. Between January 1 and September 30, 2024, Telegram fulfilled 14 requests “for IP addresses and/or phone numbers” from the United States, which affected a total of 108 users, according to Telegram’s Transparency Reports bot. But for the entire year of 2024, it fulfilled 900 requests from the U.S. affecting a total of 2,253 users, meaning that the number of fulfilled requests skyrocketed between October and December, according to the newly released data.
“Fulfilled requests from the United States of America for IP address and/or phone number: 900,” Telegram’s Transparency Reports bot said when prompted for the latest report by 404 Media. “Affected users: 2253,” it added.
Members of an underground criminal community that hack massive companies, steal swathes of cryptocurrency, and even commission robberies or shootings against members of the public or one another have an unusual method for digging up personal information on a target: the truck and trailer rental company U-Haul. With access to U-Haul employee accounts, hackers can look up a U-Haul customer's personal data and use it to social engineer their way into the target's online accounts, or potentially target them with violence.
The news shows how members of the community, known as the Com and composed of potentially a thousand people who coalesce on Telegram and Discord, use essentially any information available to them to dox or hack people, no matter how obscure. It also provides context as to why U-Haul may have been targeted repeatedly in recent years, with the company previously disclosing multiple data breaches.
“U-Haul has lots of information, it can be used for all sorts of stuff. One of the primary cases is for doxing targs [targets] since they [seem] to have information not found online and ofc U-Haul has confirmed this info with the person prior,” Pontifex, the administrator of a phishing tool which advertises the ability to harvest U-Haul logins, told 404 Media in an online chat. The tool, called Suite, also advertises phishing pages for Gmail, Coinbase, and the major U.S. carriers T-Mobile, AT&T, and Verizon.
This is Behind the Blog, where we share our behind-the-scenes thoughts about how a few of our top stories of the week came together. This week, we talk more about magic links and building shelves offline. A light Behind the Blog today, but we're back from the holiday on Monday.
JOSEPH: There has been a lot of response to our post We Don't Want Your Password. Much of it supportive, some of it mad, some of it funny. The TLDR (although I do think the post is worth a read) is that we're four journalists trying to spend as much time as possible doing actual journalism, rather than spending our very limited amount of time building things that are not necessary and that we're not equipped to do. We do want to build, like our big project for a full-text RSS feed for paying subscribers and for the broader independent media ecosystem, but we're not interested in using up resources (time, mostly) on introducing a username/password login for the site when the current magic link system works mostly fine and is how the CMS we use is designed.
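For readers unfamiliar with the mechanism, here is a rough sketch in Python of how a magic-link login generally works: the site emails you a signed, expiring link instead of storing a password. This is a generic illustration under assumed names and parameters (the 15-minute window, the example.com URL, the function names), not the code our CMS actually uses.

```python
import hashlib
import hmac
import secrets
import time
from urllib.parse import parse_qs, urlparse

# Hypothetical magic-link flow. Names, the signing scheme, and the
# 15-minute expiry window are invented for illustration only.
SECRET_KEY = secrets.token_bytes(32)
TOKEN_TTL_SECONDS = 15 * 60

def issue_magic_link(email: str) -> str:
    """Build a signed, one-time login URL to email to the subscriber."""
    expires = str(int(time.time()) + TOKEN_TTL_SECONDS)
    payload = f"{email}|{expires}"
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://example.com/login?email={email}&expires={expires}&sig={signature}"

def verify_magic_link(email: str, expires: str, sig: str) -> bool:
    """Check the signature and expiry when the reader clicks the link."""
    if int(expires) < time.time():
        return False
    payload = f"{email}|{expires}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

if __name__ == "__main__":
    link = issue_magic_link("reader@example.com")
    print(link)
    # Simulate the reader clicking the link by re-parsing its query string.
    params = parse_qs(urlparse(link).query)
    print(verify_magic_link(params["email"][0], params["expires"][0], params["sig"][0]))
```

The appeal of the design is that the only secret that matters is access to the reader's inbox, which is something they already have to protect, so the site never has a password database to build, maintain, or leak.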
Here's a special year in review episode of the 404 Media Podcast! We riff on the last year in AI, media, journalism, and more. We'll be back with a normal news show in the new year!
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
The Secret Service never actually checked whether people gave proper consent to be tracked by a mobile phone location monitoring tool, despite claiming the data was collected with people's permission, the agency admitted in an email obtained by 404 Media.
The email undermines the Secret Service's and other U.S. federal agencies' justification that monitoring the movements of phones with commercially available location data without a warrant is possible because people allegedly agreed to the terms of service of ordinary apps that may collect it. The news also comes after the Federal Trade Commission (FTC) banned Venntel, the company that provided the underlying dataset for the surveillance tool used by the Secret Service, from selling sensitive location data, and alleged that it did not obtain that consent in multiple cases. The tool used by the Secret Service is called Locate X, which is made by a company called Babel Street.
In the 2022 email, the office of Senator Ron Wyden asked the Secret Service what steps it had taken to verify that the location data it purchased from Babel Street was obtained from consumers who consented to "the onwards sale and sharing of the data." Venntel collates location data from a variety of sources, including apps installed on people's phones, such as weather or navigation tools. The Secret Service's one-word response to that question read "None," according to a copy of the email Wyden's office shared with 404 Media.
Hello! Here's a holiday gift: an episode of the 404 Media Podcast that was previously only for paying subscribers! It gives a lot more context on how and why we cover AI the way we do. Here's the original description of the episode:
We got a lot of, let's say, feedback on some of our recent stories on artificial intelligence. One was about people using Bing's AI to create images of cartoon characters flying a plane into a pair of skyscrapers. Another was about 4chan using the same tech to quickly generate racist images. Here, we use that dialogue as a springboard to chat about why we cover AI the way we do, the purpose of journalism, and how that relates to AI and tech overall. This was fun, and let us know what you think. We're definitely happy to do more of these sorts of discussions for our subscribers in the future.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
A lawyer defending an alleged distributor of Anom, the encrypted phone company for criminals that the FBI secretly ran and backdoored to intercept tens of millions of messages, is pushing to learn the identity of the confidential human source (CHS) who first created Anom and provided it to the FBI, starting the largest sting operation in history, according to recently filed court records. The government says it will provide that identity under discovery, but the CHS may also be revealed in open court if they testify.
The move is significant in that the CHS, who used the pseudonym Afgoo while running Anom, is a likely target for retaliation from violent criminals caught in Anom's net. The Anom case, called Operation Trojan Shield, implicated hundreds of criminal syndicates in more than 100 countries, including South American cocaine traffickers, Australian biker gangs, and kingpins hiding in Dubai. Anom also snagged significant individual drug traffickers like Hakan Ayik, who authorities say heads the Aussie Cartel, which brought in more than a billion Australian dollars in profit annually.
Court records say, however, that if this defendant’s case goes to trial, the lawyer believes Afgoo will be the “government’s key witness.”
This week Jason, as both a drones and aliens reporter, tells us what is most likely happening with the mysterious drones flying over New Jersey. After the break, Joseph explains how cops in Serbia are using Cellebrite phone unlocking tech as a doorway to installing malware on activists' and journalists' phones. In the subscribers-only section, Sam tells us all about an amazing art project using traffic cameras in New York City.
Listen to the weekly podcast on Apple Podcasts, Spotify, or YouTube. Become a paid subscriber for access to this episode's bonus content and to power our journalism. If you become a paid subscriber, check your inbox for an email from our podcast host Transistor for a link to the subscribers-only version! You can also add that subscribers feed to your podcast app of choice and never miss an episode that way. The email should also contain the subscribers-only unlisted YouTube link for the extended video version. It will also be in the show notes in your podcast player.
The Department of Homeland Security (DHS) believes that China, Russia, Iran, and Israel are the “primary” countries exploiting security holes in telecommunications networks to spy on people inside the United States, which can include tracking their physical movements and intercepting calls and texts, according to information released by Senator Ron Wyden.
The news provides more context around the use of SS7, the exploited network and protocol, against phones in the country. In May, 404 Media reported that an official inside DHS's Cybersecurity and Infrastructure Security Agency (CISA) broke with his department's official narrative and publicly warned about multiple SS7 attacks on U.S. persons in recent years. Now, the newly disclosed information provides more specifics on where at least some SS7 attacks are originating.
The information is included in a letter the Department of Defense (DoD) wrote in response to queries from the office of Senator Wyden. The letter says that in September 2017 DHS personnel gave a presentation on SS7 security threats at an event open to U.S. government officials. The letter says that Wyden staff attended the event and saw the presentation. One slide identified the “primary countries reportedly using telecom assets of other nations to exploit U.S. subscribers,” it continues.
Authorities in Serbia have repeatedly used Cellebrite tools to unlock mobile phones so they could then infect them with potent malware, including the phones of activists and a journalist, according to a new report from human rights organization Amnesty International.
The report is significant because it shows that although Cellebrite devices are typically designed to unlock or extract data from phones that authorities have physical access to, they can also be used to open the door for installing active surveillance technology. In these cases, the devices were infected with malware and then returned to the targets. Amnesty also says it, along with researchers at Google, discovered a vulnerability in a wide range of Android phones that Cellebrite was exploiting. Qualcomm, the affected chip manufacturer, has since fixed that vulnerability. And Amnesty says Google has remotely wiped the spyware from other infected devices.
“I am concerned by the way police behave during the incident, especially the way how they took/extracted the data from my mobilephone without using legal procedures. The fact that they extracted 1.6 GB data from my mobilephone, including personal, family and business information as well as information about our associates and people serving as a ‘source of information’ for journalist research, is unacceptable,” Slaviša Milanov, deputy editor and journalist of Serbian outlet FAR and whose phone was targeted in such a way, told 404 Media. Milanov covers, among other things, corruption.