X is raising prices for its top-tier subscription service by 37.5%, marking the largest price increase since the platform’s acquisition by Elon Musk in 2022. The Premium+ service costs $22 monthly in the U.S., up from $16, effective December 21, according to a company statement. The annual subscription has risen to $229 from $168. X […]
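For readers checking the math, a quick sketch (using only the prices quoted above) confirms the stated percentage:

```python
# Sanity-check the quoted Premium+ price increases using the figures above.
monthly_old, monthly_new = 16, 22
annual_old, annual_new = 168, 229

monthly_pct = (monthly_new - monthly_old) / monthly_old * 100  # -> 37.5
annual_pct = (annual_new - annual_old) / annual_old * 100      # -> ~36.3

print(f"Monthly Premium+: {monthly_pct:.1f}% increase")
print(f"Annual Premium+: {annual_pct:.1f}% increase")
```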
Elon Musk has sparked backlash in Germany after calling for the chancellor to resign and backing the AfD.
The German health minister said Musk "should not interfere in our politics."
It comes as right-wing leaders in Europe seize on an attack on a Christmas market in Magdeburg, Germany.
Elon Musk has stirred controversy in Germany after calling Chancellor Olaf Scholz an "incompetent fool" and backing the country's far-right Alternative for Germany (AfD) party.
In a post on X, Musk first reshared a video by right-wing influencer Naomi Seibt in which she criticizes Friedrich Merz, one of the leading candidates to become Germany's next chancellor.
"Only the AfD can save Germany," Musk, who is the richest person in the world, wrote alongside the post.
Musk reshared a post purportedly showing an image of the suspect that said the attack was a "DIRECT RESULT of mass unchecked immigration."
"Scholz should resign immediately. Incompetent fool," Musk added in a separate post.
Leading right-wing figures across Europe have seized on the incident to promote anti-immigrant rhetoric and call for tighter border controls.
Musk's comments, which come just two months before Germany is set to hold a snap federal election, have sparked backlash in the country.
Scholz appeared to respond indirectly at a press conference in Berlin, saying, "We have freedom of speech here. That also applies to multimillionaires. Freedom of speech also means that you're able to say things that aren't right and do not contain good political advice," per the Guardian.
Karl Lauterbach, the German health minister, said on X that Musk "should not interfere in our politics," adding that "his platform profits from hate and incitement and radicalizes people."
The AfD party was established in 2013 as an anti-euro party, but it has since focused more on immigration and has been seen as increasingly far-right.
Musk, however, has previously questioned how far-right the party's policies are.
In a post on X in June, he wrote: "Why is there such a negative reaction from some about AfD?"
"They keep saying 'far right,' but the policies of AfD that I've read about don't sound extremist. Maybe I'm missing something," he added.
The Tesla CEO has shown growing support for right-wing leaders, including Italy's Prime Minister Giorgia Meloni and Nigel Farage, leader of Britain's Reform UK party.
Earlier this week, Farage boasted that Musk was "right behind" him and hinted that the tech mogul might financially back his party.
How did Elon Musk go from being an Obama supporter to a self-described "dark MAGA" Trump ally? Here's a look at the relationship between two billionaires ahead of the second Trump presidency.
For months, popular fighting game YouTubers have been under attack. Even the seemingly most cautious among them have been duped by sophisticated phishing campaigns that pose convincingly as legitimate sponsorship offers from established brands, then hijack their accounts to push cryptocurrency scams.
These scams often start with bad actors seemingly taking over verified accounts on X (formerly Twitter) with substantial followings and then using them to impersonate marketing managers at real brands who can be easily found on LinkedIn.
The fake X accounts go to great lengths to appear legitimate. They link to brands' actual websites and populate feeds with histories seemingly spanning decades by re-posting brands' authentic posts.
A State Department agency – which has been chided by conservatives for its alleged blacklisting of Americans and news outlets – is set to be re-funded in the continuing resolution (CR) bill currently being hammered out among lawmakers on Capitol Hill.
The Global Engagement Center is included on page 139 of the CR. Although the bill doesn’t specify a budget allocation, a previous Inspector General report shows the agency’s FY 2020 budget totaled $74.26 million, of which $60 million was appropriated by Congress.
The provision in the CR can be found under "Foreign Affairs Section 301. Global Engagement Center Extension," and comes despite the State Department saying in response to a lawsuit that it intended to shut down the agency by next week.
The GEC, according to reporter Matt Taibbi, "funded a secret list of subcontractors and helped pioneer an insidious—and idiotic—new form of blacklisting" during the pandemic.
Taibbi wrote last year when exposing the Twitter Files that the GEC "flagged accounts as ‘Russian personas and proxies’ based on criteria like, ‘Describing the Coronavirus as an engineered bioweapon,’ blaming ‘research conducted at the Wuhan institute,’ and ‘attributing the appearance of the virus to the CIA.’"
"State also flagged accounts that retweeted news that Twitter banned the popular U.S. website ZeroHedge, claiming it 'led to another flurry of disinformation narratives.'" ZeroHedge had made reports speculating that the virus had a lab origin.
Elon Musk previously described the GEC as being the "worst offender in US government censorship & media manipulation."
"They are a threat to our democracy," Musk wrote in a subsequent tweet.
The GEC is part of the State Department but also partners with the Federal Bureau of Investigation, the Central Intelligence Agency, the National Security Agency, the Defense Advanced Research Projects Agency, the Special Operations Command and the Department of Homeland Security. The GEC also funds the Atlantic Council's Digital Forensic Research Lab (DFRLab).
Taibbi offered various instances in which the DFRLab and the GEC sent Twitter a list of accounts they believed were engaged in "state-backed coordinated manipulation." However, a quick glance from Twitter employees determined that the list was shoddy and included the accounts of multiple American citizens with seemingly no connection to the foreign entity in question.
DFRLab Director Graham Brookie previously denied the claim that they use tax money to track Americans, saying its GEC grants have "an exclusively international focus."
A 2024 report from the Republican-led House Small Business Committee criticized the GEC for awarding grants to organizations whose work includes tracking domestic as well as foreign misinformation and rating the credibility of U.S.-based publishers, according to the Washington Post.
The State Department, in response to a lawsuit, said it intended to shut down the agency on Dec. 23. But the CR provision means, if passed, it will continue to operate.
The lawsuit was brought by Texas Attorney General Ken Paxton, the Daily Wire and the Federalist, who sued the State Department, Secretary of State Antony Blinken, and other government officials earlier this month for "engaging in a conspiracy to censor, deplatform and demonetize American media outlets disfavored by the federal government."
The lawsuit stated that the GEC was used as a tool for the defendants to carry out their censorship.
"Congress authorized the creation of the Global Engagement Center expressly to counter foreign propaganda and misinformation," the Texas Attorney General’s Office said in a press release. "Instead, the agency weaponized this authority to violate the First Amendment and suppress Americans’ constitutionally-protected speech.
The complaint describes the State Department’s project as "one of the most egregious government operations to censor the American press in the history of the nation.’"
The lawsuit argued that The Daily Wire, The Federalist, and other conservative news organizations were branded "unreliable" or "risky" by the agency, "starving them of advertising revenue and reducing the circulation of their reporting and speech—all as a direct result of [the State Department’s] unlawful censorship scheme."
Meanwhile, America First Legal, headed up by Stephen Miller, President-elect Trump’s pick for deputy chief of staff for policy, revealed that the GEC used taxpayer dollars to create a video game called "Cat Park" to "Inoculate Youth Against Disinformation" abroad.
The game "inoculates players … by showing how sensational headlines, memes, and manipulated media can be used to advance conspiracy theories and incite real-world violence," according to a memo obtained by America First Legal.
Mike Benz, the executive director at the Foundation For Freedom Online, said the game was "anti-populist" and pushed certain political beliefs instead of protecting Americans from foreign disinformation, per the Tennessee Star.
When asked for comment by Fox News Digital, a State Department spokesperson said the agency does not comment on pending legislation.
Fox News Digital also reached out to the GEC for comment on its potential re-funding but did not immediately receive a response.
Fox News' Nikolas Lanum and Louis Casiano contributed to this report.
Generative AI tools have made it easier to create fake images, videos, and audio.
That sparked concern that this busy election year would be disrupted by realistic disinformation.
The barrage of AI deepfakes didn't happen. An AI researcher explains why and what's to come.
Oren Etzioni has studied artificial intelligence and worked on the technology for well over a decade, so when he saw the huge election cycle of 2024 coming, he got ready.
India, Indonesia, and the US were just some of the populous nations sending citizens to the ballot box. Generative AI had been unleashed upon the world about a year earlier, and there were major concerns about a potential wave of AI-powered disinformation disrupting the democratic process.
"We're going into the jungle without bug spray," Etzioni recalled thinking at the time.
He responded by starting TrueMedia.org, a nonprofit that uses AI-detection technologies to help people determine whether online videos, images, and audio are real or fake.
The group launched an early beta version of its service in April, so it was ready for a barrage of realistic AI deepfakes and other misleading online content.
In the end, the barrage never came.
"It really wasn't nearly as bad as we thought," Etzioni said. "That was good news, period."
He's still slightly mystified by this, although he has theories.
First, you don't need AI to lie during elections.
"Out-and-out lies and conspiracy theories were prevalent, but they weren't always accompanied by synthetic media," Etzioni said.
Second, he suspects that generative AI technology is not quite there yet, particularly when it comes to deepfake videos.
"Some of the most egregious videos that are truly realistic — those are still pretty hard to create," Etzioni said. "There's another lap to go before people can generate what they want easily and have it look the way they want. Awareness of how to do this may not have penetrated the dark corners of the internet yet."
One thing he's sure of: High-end AI video-generation capabilities will come. This might happen during the next major election cycle or the one after that, but it's coming.
With that in mind, Etzioni shared learnings from TrueMedia's first go-round this year:
Democracies are still not prepared for the worst-case scenario when it comes to AI deepfakes.
There's no purely technical solution for this looming problem, and AI will need regulation.
Social media has an important role to play.
TrueMedia achieves roughly 90% accuracy, though users asked for more. Since 100% accuracy isn't attainable, there's still room for human analysts.
Having humans check every decision doesn't scale, so they get involved only in edge cases, such as when users question a decision made by TrueMedia's technology.
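A minimal sketch of what that kind of human-in-the-loop triage can look like; the thresholds, names, and review band here are illustrative assumptions, not TrueMedia's actual code:

```python
# Illustrative triage: automated verdicts for clear-cut scores,
# human review for ambiguous ones or user appeals. Not TrueMedia's implementation.
def triage(fake_score: float, appealed: bool = False,
           review_band: tuple = (0.35, 0.65)) -> str:
    """fake_score: the model's estimated probability that the media is synthetic."""
    low, high = review_band
    if appealed or low < fake_score < high:
        return "needs human review"   # edge cases go to analysts
    return "likely fake" if fake_score >= high else "likely real"

print(triage(0.92))                 # clear-cut -> "likely fake"
print(triage(0.50))                 # ambiguous -> "needs human review"
print(triage(0.10, appealed=True))  # user questioned the call -> human review
```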
The group plans to publish research on its AI deepfake detection efforts, and it's working on potential licensing deals.
"There's a lot of interest in our AI models that have been tuned based on the flurry of uploads and deepfakes," Etzioni said. "We hope to license those to entities that are mission-oriented."
In the early days of the pandemic, Josh Kramer and his wife set up a Discord server to stay in touch with their friends. Branched off from the main group of about 20 people are channels for topics that only some members want to be notified about, so nobody is pinged constantly: AI and crypto, which took over a channel previously devoted to "Tiger King," and another called "sweethomies" for talking about their houses and apartments. Now, more than four years later, it's become "essential" for the extended friend group, says Kramer, seeing them through the early anxiety of COVID-19 and two presidential elections.
While the chat is made up of friendly faces, it's not really an echo chamber — not everyone has the same ideology or political opinions, Kramer tells me. But it's more productive than screaming into the void on social media. Now, when he has a thought that may have turned into a tweet, he instead takes it to the group, where it can become a conversation.
"It's a way to have conversations about complicated issues, like national politics, but in context with people I actually know and care about," Kramer, who is the head of editorial at New_ Public, a nonprofit research and development lab focusing on reimagining social media, tells me. The success of the server has also informed how he thinks about ways to reform the social web. On election night, for example, using the group chat was less about scoring points with a quippy tweet and "more about checking in with each other and commiserating about our experience, rather than whatever you might take to Twitter to talk about to check in with the broader zeitgeist."
In the month or so since the 2024 election, thousands have abandoned or deactivated their X accounts, taking issue with Elon Musk's move to use the platform as a tool to reelect Donald Trump, as they seek new ways to connect and share information. Bluesky, which saw its users grow 110% in November according to market intelligence firm Sensor Tower, has emerged as the most promising replacement among many progressives, journalists, and Swifties, as it allows people to easily share links and doesn't rely as heavily on algorithmic delivery of posts as platforms like Facebook, X, and TikTok have come to. But some are turning further inward to smaller group chats, either via text message or on platforms like Discord, WhatsApp, and Signal, where they can have conversations more privately and free of algorithmic determinations.
It's all part of the larger, ongoing fracturing of our social media landscape. For a decade, Twitter proved to be the room where news broke. Other upstarts launched after Elon Musk bought the platform in 2022 and tried to compete, luring people with promises of moderation and civility, but ultimately folded, largely because they weren't very fun or lacked the momentum created by the kind of power users that propelled the old Twitter. But for many, there's still safety in the smaller group chats, which take the form of your friends who like to shit talk in an iMessage chain or topic-focused, larger chats on apps like Discord or WhatsApp.
"Group chats have been quite valued," Kate Mannell, a research fellow with the ARC Centre of Excellence for the Digital Child at Deakin University in Australia, tells me. They allow people to chat with selected friends, family members, or colleagues to have much "more context-specific kinds of conversations, which I think is much more reflective of the way that our social groups actually exist, as opposed to this kind of flattening" that happens on social media. When people accumulate large followings on social media, they run into context collapse, she says. The communication breakdown happens as the social platforms launched in the 2000s have taken on larger lives than anyone anticipated.
By contrast, some more exclusive chats are seen as cozy, safe spaces. Most of Discord's servers are made up of fewer than 10 people, Savannah Badalich, the senior director of policy at Discord, tells me. The company has 200 million active users, up from 100 million in 2020. What started as a place to hang with friends while playing video games still incentivizes interacting over lurking or building up big followings. "We don't have that endless doomscrolling," Badalich says. "We don't have that place where you're passively consuming content. Discord is about real-time interaction." And interacting among smaller groups may be more natural. Research by the psychologist Robin Dunbar in the 1990s found that humans could cognitively maintain about 150 meaningful relationships. More recent research has questioned that determination, but any person overburdened by our digital age can surely tell you that you can only show up authentically and substantially in person for a small subset of the people you follow online. A 2024 study, conducted by Dunbar and the telecommunications company Vodafone, found that the average person in the UK was part of 83 group chats across all platforms, with a quarter of people using group chats more often than one-to-one messages.
In addition to hosting group chats, WhatsApp has tried more recently to position itself as a place for news, giving publishers the ability to send headlines directly to followers. News organizations like MSNBC, Reuters, and Telemundo have channels. CNN has nearly 15 million followers, while The New York Times has about 13 million. Several publishers recently told the Times that they were seeing growth and traffic come from WhatsApp, but the channels have yet to rival sources like Google or Facebook. While it gives them the power to connect to readers, WhatsApp is owned by Meta, which has a fraught history of hooking media companies and making them dependent on traffic on its social platforms only to later de-emphasize their content.
Victoria Usher, the founder and CEO of Gingermay, a B2B tech communications firm, says she's in several large, business-focused group chats on WhatsApp. Usher, who lives in the UK, even found these chats were a way to get news about the US election "immediately." In a way, the group chats are her way of optimizing news and analysis of it, and it works because there's a deep sense of trust between those in the chat that doesn't exist when scrolling X. "I prefer it to an algorithm," she says. "It's going to be stories that I will find interesting." She thinks they deliver information better than LinkedIn, where people have taken to writing posts in classic LinkedIn style to please the algorithm — which can be both self-serving and cringe. "It doesn't feel like it's a truthful channel," Usher says. "They're trying to create a picture of how they want to be seen personally. Within WhatsApp groups or Signal, people are much more likely to post what they actually feel about something."
The candid nature of group chats — which some have called the last safe spaces in society today — gives them value and tethers people with looser connections together, but that can also make them unwieldy. Some of the larger group chats, like those on Discord, have moderation and rules. But when it comes to just chatting with your friends or family, there's largely no established group-chat etiquette. Group chats can languish for years; there's no playbook for leaving or kicking out someone who's no longer close to the core group. If a couple breaks up, who gets the group chat? How many memes is a person allowed to send a day? What happens when the group texts get leaked? There's often "no external moderator to come in and say, 'That's not how we do things,'" Mannell says.
Kramer, while he likes his Discord chat, is optimistic about the future of groups and new social networks. He says he's also taken over an inactive community Facebook group for his neighborhood and made more connections with his neighbors. We're in a moment where massive change could come to our chats and our social networks. "There's been a social internet for 30 years," says Kramer, but there's "so much room for innovation and new exciting and alternative options." Still, his group chat might have the best vibes of all. Messaging there "has less to do with being right and scoring points" than on social media, he says. "It has so much value to me on a personal level, as a place of real support."
Amanda Hoover is a senior correspondent at Business Insider covering the tech industry. She writes about the biggest tech companies and trends.
Reid Hoffman said he'd faced threats after Elon Musk fueled a baseless conspiracy theory about him.
Musk has amplified claims that the LinkedIn cofounder was a client of Jeffrey Epstein.
Hoffman said he regretted his past association with Epstein and had hired security after threats.
Reid Hoffman, a cofounder of LinkedIn, said he had received threats of violence — and had to hire security — since Elon Musk fueled a baseless conspiracy theory about him.
Musk, the Tesla CEO who worked with Hoffman at PayPal, replied earlier this month to an X post in which a user implied Hoffman had visited the sex offender Jeffrey Epstein's private island.
He replied with the "100" emoji to a post saying: "This guy is TERRIFIED about Trump releasing the Epstein Client list after all his visits to Epstein Island."
Musk also made the claim during an October interview with the former Fox News anchor Tucker Carlson, in which he said Hoffman was among the "billionaires behind Kamala" who were "terrified" by the prospect of Epstein's client list being made public.
Speaking with the British newspaper The Sunday Times, Hoffman said Musk had developed a "conviction with no evidence" that he had a close relationship with Epstein.
"Elon's defamation makes me angry and sad," he said. "Angry because it is an ugly assault. Sad because it comes from someone whose entrepreneurial achievements I continue to admire."
He added that he didn't want to "dignify" the threats he had received by sharing any details but said, "I've hired security staff as a result."
After Epstein's suicide in jail in 2019, Hoffman apologized for inviting him to a dinner party in 2015 with other tech tycoons — including Musk, Facebook CEO Mark Zuckerberg, and Palantir's cofounder Peter Thiel — while fundraising for MIT's renowned Media Lab.
Hoffman said he was told Epstein's involvement in raising donations had been vetted and approved by MIT. But he later wrote in an email to Axios that he regretted not conducting his own research into Epstein, who died while awaiting trial on sex-trafficking charges.
"My last interaction with Epstein was in 2015," Hoffman said in the email. "Still, by agreeing to participate in any fundraising activity where Epstein was present, I helped to repair his reputation and perpetuate injustice. For this, I am deeply regretful."
He told The Sunday Times that he "went to no Epstein parties" and that he "didn't even know who he was."
Hoffman is a major Democratic donor who used X to voice his support for Vice President Kamala Harris in the presidential election. "My message for American voters and Russian bots: don't vote for the guy too busy selling you a scamcoin," he wrote in a post on X on Election Day. Donald Trump, then the Republican presidential nominee, launched his own crypto coin, World Liberty Financial's WLFI, in October.
Elon Musk shared on X a legal letter saying Neuralink faces a probe by the SEC.
He shared the letter in a series of posts attacking and mocking SEC Chair Gary Gensler.
Musk wrote, "Oh Gary, how could you do this to me?"
Elon Musk has revealed that Neuralink, his brain-chip implant company, is facing a probe from the Securities and Exchange Commission, with which he has long feuded.
Musk posted a letter on the subject to X Thursday, as well as a mocking, AI-generated image of SEC chair Gary Gensler. He called the SEC "just another weaponized institution doing political dirty work."
"Oh Gary, how could you do this to me?" Musk wrote in the post sharing the letter from his lawyer, Alex Spiro, to Gensler.
In the letter, captioned "in the matter of certain purchases, sales, and disclosures of Twitter shares," Spiro said the SEC "reopened" an investigation into Neuralink but didn't elaborate on why. It also said the SEC was preparing action against Musk over his 2022 acquisition of Twitter, now X.
The billionaire later shared another post featuring an AI-generated image of a snail wearing a business suit and said it depicted Gensler.
Neuralink and the SEC didn't immediately respond to requests for comment from Business Insider, made outside normal working hours.
The SEC is investigating how Musk bought shares in Twitter ahead of his $44 billion acquisition of the social network.
Musk started buying shares in Twitter in 2022, and by the spring, he had a 9% stake in the company before he struck a deal to buy it outright later in the year.
Spiro, Musk's lawyer, also said in the letter that the SEC issued a "settlement demand" on Wednesday to agree within 48 hours to make a payment or face enforcement action.
Spiro wrote that this followed "a multi-year investigation and more than six years of harassment" of Musk by the SEC.
This is an apparent reference to the SEC suing Musk in 2018 over a tweet in which he claimed he had the funding to take Tesla private, which led to him being forced to step down as chairman.
X is one of the least used major social media sites among US teenagers, followed only by Reddit and Threads, according to a new study published by the Pew Research Center.
The Washington DC-based think tank surveyed nearly 1,400 teenagers between September and October to collect the data, which showed that 17% of teen respondents said they use X, a six-point decrease from 2022 when 23% of surveyed teenagers said they used the site.
Elon Musk purchased X, formerly Twitter, in 2022.
Representatives for X did not respond to a request for comment from Business Insider.
Other popular social media sites also saw a decline in use among teens.
YouTube, owned by Google, attracted the highest percentage of teenage users despite falling from 95% to 90% from 2022 to 2024. ByteDance's TikTok came in second place with 63% of respondents saying they used the app, compared to 67% two years ago.
Snap Inc.'s Snapchat recorded 55%, another slight decline from 59% in 2022.
Instagram, owned by Meta, was used by 61%, about the same as two years ago, while Meta's Facebook also held steady at 32%. Reddit also remained consistent, with 14% of teens saying they used the app, the same as 2022.
Threads, which Meta launched in 2023, was used by 6% of teens.
There was only one social media site that grew in popularity with teens over the past two years: WhatsApp.
The Meta-owned messaging app went from 17% of teens saying they used it in 2022 to 23% this year — overtaking X in teenage users, according to the Pew surveys.
Meta, then Facebook Inc., bought WhatsApp for $22 billion in 2014, an investment that the company says is finally paying off.
On Meta's quarter-three earnings call in November, the company reported a 48% year-over-year increase in non-advertising revenue that was largely attributed to WhatsApp.
The revenue boost was mostly due to the app's product that allows businesses to pay to chat directly with customers.
But WhatsApp is also known to be great for large group chats, which have become increasingly popular with teens.
Elon Musk, Jeff Bezos, and Bill Gates have recommended countless books over the years that they credit with strengthening their business acumen and shaping their worldviews.
Here are 20 books recommended by Musk, Bezos, and Gates to add to your reading list:
Jeff Bezos
Some of Bezos' favorite books were instrumental to the creation of products and services like the Kindle and Amazon Web Services.
"The Innovator's Solution"
This book on innovation explains how companies can become disruptors. It's one of three books Bezos made his top executives read one summer to map out Amazon's trajectory.
"The Goal: A Process of Ongoing Improvement"
Also on that list was "The Goal," in which Eliyahu M. Goldratt and Jeff Cox examine the theory of constraints from a management perspective.
"The Effective Executive"
The final book on Bezos' reading list for senior managers, "The Effective Executive," lays out habits of successful executives, like time management and effective decision-making.
"Built to Last: Successful Habits of Visionary Companies"
This book draws on six years of research from the Stanford University Graduate School of Business that looks into what separates exceptional companies from their competitors. Bezos has said it's his "favorite business book."
"The Remains of the Day"
This Kazuo Ishiguro novel tells of an English butler in wartime England who begins to question his lifelong loyalty to his employer while on a vacation.
Bezos has said of the book, "Before reading it, I didn't think a perfect novel was possible."
Elon Musk
The Tesla CEO has recommended several AI books, sci-fi novels, and biographies over the years.
"What We Owe the Future"
One of Musk's most recent picks, this book tackles longtermism, which its author defines as "the view that positively affecting the long-run future is a key moral priority of our time." Musk says the book is a "close match" for his philosophy.
"Superintelligence: Paths, Dangers, Strategies"
Musk has also recommended several books on artificial intelligence, including this one, which considers questions about the future of intelligent life in a world where machines might become smarter than people.
"Life 3.0: Being Human in the Age of Artificial Intelligence"
In this book, MIT professor Max Tegmark writes about ensuring artificial intelligence and technological progress remain beneficial for human life in the future.
"Zero to One: Notes on Startups, or How to Build the Future"
Peter Thiel shares lessons he learned founding companies like PayPal and Palantir in this book.
Musk has said of the book, "Thiel has built multiple breakthrough companies, and Zero to One shows how."
Bill Gates
The Microsoft cofounder usually publishes two lists each year, one in the summer and one at year's end, of his book recommendations.
"How the World Really Works"
In his 2022 summer reading list, Gates highlighted this work by Vaclav Smil that explores the fundamental forces underlying today's world, including matters like energy production and globalization.
"If you want a brief but thorough education in numeric thinking about many of the fundamental forces that shape human life, this is the book to read," Gates said of the book.
"Why We're Polarized"
In this book, also on Gates' 2022 summer reading list, Ezra Klein argues that the American political system has become polarized around identity to dangerous effect. Gates calls it "a fascinating look at human psychology."
"Business Adventures: Twelve Classic Tales from the World of Wall Street"
Gates has said this is "the best business book I've ever read." It compiles 12 articles that originally appeared in The New Yorker about moments of success and failure at companies like General Electric and Xerox.
"Factfulness: Ten Reasons We're Wrong About the World—and Why Things Are Better Than You Think"
This book investigates the thinking patterns and tendencies that distort people's perceptions of the world. Gates has called it "one of the most educational books I've ever read."
Luigi Mangione, a 26-year-old tech worker, was charged with the killing of UnitedHealthcare CEO Brian Thompson.
The University of Pennsylvania graduate reportedly stopped speaking with friends and family after back surgery last year.
Deleted social media posts show skepticism toward doctors, Donald Trump, and Joe Biden, and support for RFK Jr.
Luigi Mangione, the man charged with the murder of UnitedHealthcare CEO Brian Thompson, seemingly supported Robert F. Kennedy Jr., appeared to harbor frustrations with the medical field, and expressed skepticism toward both Donald Trump and Joe Biden, deleted posts on X show.
Mangione, a 26-year-old software developer who reportedly fell out of touch with friends and family after back surgery last year, reposted Edward Snowden's suggestion that Democrats should nominate Robert F. Kennedy Jr. for president following Joe Biden's disastrous debate performance in June.
darkly amusing to watch panicked dems suddenly searching under the couch cushions for a candidate when kennedy is literally standing right there
The deleted posts, which Business Insider viewed on Archive.org, are among the most recent online clues about Mangione found so far.
Mangione has been described as both an "anti-capitalist" and a member of the "online right." His deleted posts support the idea that his worldview was influenced by reactionary right-wing thinkers.
In another deleted post from May, Mangione reposted another user's skepticism of doctors, adding detail to reports about Mangione's dissatisfaction with the US healthcare system. A former roommate from Hawaii told the Honolulu Civil Beat that Mangione had chronic back pain.
"My experience with the medical profession — and yours is probably similar — is that doctors are basically worthless unless you carefully manage them, and 2/3 of them are worthless even in that case," the post said.
The author of the original post, Zero HP Lovecraft, calls himself a "fascist hipster." His Substack shows he submitted a short story for the Passage Prize, an award run by a publisher known for publishing reactionary and fascist authors.
Mangione also castigated "both parties" in a reply to writer Nate Silver.
"Both parties - Trump with his refusal to accept the results of an election, and Biden with his refusal to accept his age and step down - are simultaneously proving how desperately individuals will cling to power," Mangione posted. He also referred to term limits as "critical."
In June, he reposted a suggestion by Richard Hanania, an author critical of "wokeness," that Trump thought Christians were delusional.
Trump clearly sees Christians the way most adults see kids who still believe in Santa Clause.
In July, Mangione also reposted a description of Project 2025, a roadmap for Trump's second term developed by the right-wing think tank The Heritage Foundation, as "qanon but for redditors."
On Tuesday, the US Federal Bureau of Investigation advised Americans to share a secret word or phrase with their family members to protect against AI-powered voice-cloning scams, as criminals increasingly use voice synthesis to impersonate loved ones in crisis.
"Create a secret word or phrase with your family to verify their identity," wrote the FBI in an official public service announcement (I-120324-PSA).
For example, you could tell your parents, children, or spouse to ask for a word or phrase to verify your identity if something seems suspicious, such as "The sparrow flies at midnight," "Greg is the king of burritos," or simply "flibbertigibbet." (As fun as these sound, your chosen word or phrase should be kept secret and not be one of these examples.)
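Translated into software terms, the FBI's advice amounts to a pre-shared-secret check; a minimal, hypothetical sketch (an illustration, not an FBI tool) might compare a caller's phrase against the agreed one in constant time:

```python
# Hypothetical illustration of checking a pre-shared family passphrase.
# The phrase below is a placeholder; a real one should stay private.
import hashlib
import hmac

def verify_passphrase(spoken: str, expected: str) -> bool:
    """Compare the caller's phrase to the agreed one in constant time."""
    digest = lambda s: hashlib.sha256(s.strip().lower().encode()).digest()
    return hmac.compare_digest(digest(spoken), digest(expected))

print(verify_passphrase("The sparrow flies at midnight", "the sparrow flies at midnight"))  # True
print(verify_passphrase("some other phrase", "the sparrow flies at midnight"))              # False
```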
Though Smith & Wesson's Facebook account has since been reinstated, a representative for the gunmaker told Fox News Digital that "despite multiple attempts to reach Facebook to discuss the matter, to date we have not had direct communications with any of their staff members."
The gun company, which is headquartered in Maryville, Tennessee, said staff suddenly received a notification from Facebook on Nov. 22 stating that their official Smith & Wesson account had been "suspended indefinitely."
"No warnings of a page suspension were previously communicated by Facebook," said the representative.
The representative said Facebook referenced five posts dating back to December 2023 that they "suggest did not follow their community guidelines."
"The posts in question included consumer promotional campaigns, charitable auctions, and product release announcements," the Smith & Wesson representative explained. "While Facebook’s policies are ever-changing, which creates a burden for users to comply with, we do not believe this content violated any of Facebook’s policies or community guidelines, and similar posts have been published in the past without issue."
Facebook’s commerce policy prohibits the promotion of buying, selling and trading of weapons, ammunition and explosives. However, according to Facebook’s parent company Meta’s website, there is an exception for legitimate brick-and-mortar and online retailers, though their content is still restricted for minors.
According to the representative, the page was reinstated on Nov. 27 after the gun manufacturer made a public statement about the incident on X.
In the post, which has 3.1 million views, Smith & Wesson criticized Facebook and thanked Elon Musk and X for supporting free speech amid what it called ongoing attacks against the First and Second Amendments. The company encouraged its 1.6 million Facebook followers and fans to "seek out platforms" that represent the "shared values" of free speech and the right to bear arms.
Despite the page eventually being reinstated, the representative told Fox News Digital that the company has still had no contact with Meta and "no rationale was given for the reinstatement beyond a comment on social media from a Facebook representative stating that the suspension had been ‘in error.’"
That same Meta staffer, Andy Stone, also directed Fox News Digital to the X post positing that Smith & Wesson’s suspension was an accident. In the post, Stone said "the page was suspended in error and we’ve now restored it. We apologize that this happened."
Through it all, the Smith & Wesson representative said the manufacturer is "grateful to Elon Musk for having created a public square platform that respects the right for Americans to voice their opinions, ALL opinions, and not just those that coincide with one agenda or another – especially as it relates to our constitutional rights guaranteed under the 1st and 2nd Amendments."
The spokesperson said that since their account was suspended, they have become aware that many other social media users have been similarly silenced and de-platformed.
"While we were encouraged by the reinstatement of our account, we were similarly disappointed by the number of other users reacting to our statement on X that commented that they have had very similar experiences with their accounts being de-platformed without warning," said the representative. "While we obviously do not know the details of those instances, we encourage Meta to continue working towards a more inclusive platform which allows the freedom for respectful dialogue from all viewpoints, which is a hallmark of American society."
Founded in Norwich, Connecticut, in 1852, Smith & Wesson is one of the most recognized gun brands in America and reported $535.8 million in sales in the 2024 fiscal year.
Elon Musk helped found OpenAI, but he has frequently criticized it in recent years.
Musk filed a lawsuit against OpenAI in August and just amended it to include Microsoft.
Here's a history of Musk and Altman's working relationship.
Elon Musk and Sam Altman lead rival AI firms and now take public jabs at each other — but it wasn't always like this.
Years ago, the two cofounded OpenAI, which Altman now leads. Musk departed OpenAI, which created ChatGPT, in 2018, and recently announced his own AI venture, xAI.
There is enough bad blood that Musk sued OpenAI and Altman, accusing them in the suit of betraying the firm's founding principles, before dropping the lawsuit. The billionaire then filed a new one a few months later, claiming he was "deceived" into cofounding the company. In November, he amended it to include Microsoft as a defendant, and his lawyers accused the two companies of engaging in monopolistic behavior. Microsoft is an investor in OpenAI.
Two weeks later, Musk's lawyers filed a motion requesting a judge to bring an injunction against OpenAI that would block it from dropping its nonprofit status. In the filing, Musk accused OpenAI and Microsoft of exploiting his donations to create a for-profit monopoly.
Here's a look at Musk and Altman's complicated relationship over the years:
Musk and Altman cofounded OpenAI, the creator of ChatGPT, in 2015, alongside other Silicon Valley figures, including Peter Thiel, LinkedIn cofounder Reid Hoffman, and Y Combinator cofounder Jessica Livingston.
The group aimed to create a nonprofit focused on developing artificial intelligence "in the way that is most likely to benefit humanity as a whole," according to a statement on OpenAI's website from December 11, 2015.
At the time, Musk said that AI was the "biggest existential threat" to humanity.
"It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly," a statement announcing the founding of OpenAI reads.
Musk stepped down from OpenAI's board of directors in 2018.
With his departure, Musk also backed out of a commitment to provide additional funding to OpenAI, a person involved in the matter told The New Yorker.
"It was very tough," Altman told the magazine of the situation. "I had to reorient a lot of my life and time to make sure we had enough funding."
It was reported that Sam Altman and other OpenAI cofounders had rejected Musk's proposal to run the company in 2018.
Semafor reported in 2023 that Musk wanted to run the company on his own in an attempt to beat Google. But when his offer to run the company was rejected, he pulled his funding and left OpenAI's board, the news outlet said.
In 2019, Musk shared some insight on his decision to leave, saying one of the reasons was that he "didn't agree" with where OpenAI was headed.
"I had to focus on solving a painfully large number of engineering & manufacturing problems at Tesla (especially) & SpaceX," he tweeted. "Also, Tesla was competing for some of same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms."
Musk has taken shots at OpenAI on several occasions since leaving.
Two years after his departure, Musk said, "OpenAI should be more open" in response to an MIT Technology Review article reporting that there was a culture of secrecy there, despite OpenAI frequently proclaiming a commitment to transparency.
In December 2022, days after OpenAI released ChatGPT, Musk said the company had prior access to the database of Twitter — now owned by Musk — to train the AI chatbot and that he was putting that on hold.
"Need to understand more about governance structure & revenue plans going forward. OpenAI was started as open-source & non-profit. Neither are still true," he said.
Musk was reportedly furious about ChatGPT's success, Semafor reported in 2023.
In February 2023, Musk doubled down, saying OpenAI as it exists today is "not what I intended at all."
"OpenAI was created as an open source (which is why I named it "Open" AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all," he said in a tweet.
Musk repeated this assertion a month later.
"I'm still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn't everyone do it?" he tweeted.
Musk was one of more than 1,000 people who signed an open letter calling for a six-month pause on training advanced AI systems.
The March 2023 letter, which also received signatures from several AI experts, cited concerns about AI's potential risks to humanity.
"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.
But while he was publicly calling for the pause, Musk was quietly building his own AI competitor, xAI, The New Yorker reported in 2023. He launched the company in March 2023.
Altman has addressed some of Musk's gripes about OpenAI.
"To say a positive thing about Elon, I think he really does care about a good future with AGI," Altman said last year on an episode of the "On With Kara Swisher" podcast, referring to artificial general intelligence.
"I mean, he's a jerk, whatever else you want to say about him — he has a style that is not a style that I'd want to have for myself," Altman told Swisher. "But I think he does really care, and he is feeling very stressed about what the future's going to look like for humanity."
In response to Musk's claim that OpenAI has turned into "a closed source, maximum-profit company effectively controlled by Microsoft," Altman said on the podcast, "Most of that is not true, and I think Elon knows that."
Altman has also referred to Musk as one of his heroes.
In a March 2023 episode of Lex Fridman's podcast, Altman also said, "Elon is obviously attacking us some on Twitter right now on a few different vectors."
In a May 2023 talk at University College London, Altman was asked what he's learned from various mentors, Fortune reported. He answered by speaking about Musk.
"Certainly learning from Elon about what is just, like, possible to do and that you don't need to accept that, like, hard R&D and hard technology is not something you ignore, that's been super valuable," he said.
Musk briefly unfollowed Altman on Twitter before following him again; separately, Altman later poked fun at Musk's claim to be a "free speech absolutist."
Twitter took aim at posts linking to rival Substack in 2023, forbidding users from retweeting or replying to tweets containing such links, before reversing course. In response to a tweet about the situation, Altman tweeted, "Free speech absolutism on STEROIDS."
Altman joked that he'd watch Musk and Mark Zuckerberg's rumored cage fight.
"I would go watch if he and Zuck actually did that," he said at the Bloomberg Technology Summit in June 2023, though he said he doesn't think he would ever challenge Musk in a physical fight.
Altman also repeated several of his previous remarks about Musk's position on AI.
"He really cares about AI safety a lot," Altman said at Bloomberg's summit. "We have differences of opinion on some parts, but we both care about that and he wants to make sure we, the world, have the maximal chance at a good outcome."
Separately, Altman told The New Yorker in August 2023 that Musk has a my-way-or-the-highway approach to issues more broadly.
"Elon desperately wants the world to be saved. But only if he can be the one to save it," Altman said.
Musk first sued Altman and OpenAI in March 2024.
He first sued OpenAI, Altman, and cofounder Greg Brockman in March, alleging the company's direction in recent years has violated its founding principles.
His lawyers alleged OpenAI "has been transformed into a closed-source de facto subsidiary of the largest technology company in the world" and is "refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity."
The lawsuit alleges that OpenAI executives played on Musk's concerns about the existential risks of AI and "assiduously manipulated" him into cofounding the company as a nonprofit. The intent of the company was to focus on building AI safely in an open approach to benefit humanity, the lawsuit says.
The company has since decided to take a for-profit approach.
OpenAI responded to the lawsuit by stating that "Elon's prior emails continue to speak for themselves."
The emails, which were published by OpenAI in March, show correspondence between Musk and OpenAI executives that indicated he supported a pivot to a for-profit model and was open to merging the AI startup with Tesla.
Musk expanded his beef with OpenAI to include Microsoft, accusing the two of constituting a monopoly.
The billionaire called OpenAI's partnership with Microsoft a "de facto merger" and accused the two of anti-competitive practices, including using offers of "lavish compensation" to starve rivals of AI talent. Musk's lawyers said the two companies "possess a nearly 70% share of the generative AI market."
"OpenAI has attempted to starve competitors of AI talent by aggressively recruiting employees with offers of lavish compensation, and is on track to spend $1.5 billion on personnel for just 1,500 employees," lawyers for Musk said in the complaint.
Two weeks later, Musk filed a motion asking a judge to prevent OpenAI from dropping its nonprofit status.
Musk filed a complaint to Judge Yvonne Gonzalez Rogers of the US District Court for the Northern District of California, arguing that OpenAI and Microsoft exploited his donations to OpenAI as a nonprofit to build a monopoly "specifically targeting xAI." In the filing, Musk's lawyers said OpenAI engaged in anticompetitive behaviors and wrongfully shared information with Microsoft.
If granted by the judge, the injunction could cause issues with OpenAI's partnership with Microsoft and prevent it from becoming a for-profit company.
As Musk's influence on US policy grows, his feud with Altman hangs in the balance.
As President-elect Donald Trump's self-proclaimed "First Buddy," Musk could see his power and influence on the US economy increase even further over the next four years. In addition to being a right-hand man to Trump, he'll lead the new Department of Government Efficiency with biotech billionaire Vivek Ramaswamy.
Musk hasn't been quiet about his disdain for Altman post-election. He dubbed the OpenAI cofounder "Swindly Sam" in an X post on November 15. The Wall Street Journal reported that Musk "despises" Altman, according to people familiar with the matter.
Bluesky said it "quadrupled" its moderation team in a post on Friday.
The app is working to address "handle-squatting" and impersonation accounts.
It's also exploring how to enhance account verification based on user feedback.
Bluesky has expanded its moderation team as curious social media users, many of whom are seeking an alternative to Elon Musk's X, flock to the app.
The official account for Bluesky's Trust & Safety team published a thread on Friday that shared details about its impersonation policy.
The company said the policy has been updated to be more "aggressive," adding that "impersonation and handle-squatting accounts will be removed."
"We have also quadrupled the size of our moderation team, in part to action impersonation reports more quickly. We still have a large backlog of moderation reports due to the influx of new users as we shared previously, though we are making progress," a post read.
The company said that satire, fan, and parody accounts are allowed on Bluesky, but they must label themselves as such in the display name and bio for transparency. Identity churning — or changing an account's identity to mislead users — is prohibited on the app.
"If you set up an impersonation account just to gain followers and switch to a different identity that is no longer impersonation to keep that account, your account will be removed," a post read.
Bluesky also responded to users who have asked for more concrete verification methods.
"We also hear your feedback: users want more ways to verify their identity beyond domain verification," a post read. "We're exploring additional options to enhance account verification, and we hope to share more shortly."
Representatives for Bluesky did not respond to a request for comment from Business Insider.
X's approach to moderation faced criticism when Musk took control in 2022. After his arrival, Musk laid off content moderators and staffers on the moderation team. However, X's head of business operations told Bloomberg in January that it planned to hire 100 employees tasked with content moderation and build a content moderation center in Austin.
While Bluesky is still chasing X's success, it could challenge Threads, Meta CEO Mark Zuckerberg's own Twitter knockoff.
Bluesky announced in October that it had over 13 million users. One month later, Bluesky's COO told Business Insider that it had "blown past" its user growth projections and had surpassed 21 million users. The COO said the company had to acquire more servers to keep operations running smoothly.