Today, 1 July 2025

Trump suggests DOGE should 'take a good, hard look' at government contracts with Musk's companies

1 July 2025 at 01:21
Elon Musk and President Trump
Trump escalated the feud with Elon Musk.

Andrew Harnik/Getty Images

  • Trump is suggesting that DOGE take a look at Elon Musk's business contracts with the government.
  • He said the US could "save a fortune" if there were "no more rocket launches, satellites."
  • His comments came after Musk slammed his "big, beautiful bill."

President Donald Trump suggested that the Department of Government Efficiency, an initiative Elon Musk spearheaded, look into slashing government contracts with Musk's companies.

"No more Rocket launches, Satellites, or Electric Car Production, and our Country would save a FORTUNE. Perhaps we should have DOGE take a good, hard, look at this? BIG MONEY TO BE SAVED!!!" Trump wrote on Truth Social.

Trump also suggested that Musk might have to "close up shop" and return to South Africa, where he was born. Musk has been a US citizen since 2002.

The Department of Government Efficiency was, until April, headed by Musk.

Trump's comments came after Musk reignited their feud by criticizing the president's proposed "Big, Beautiful Bill," which proposes cuts to Medicaid, removes taxes on tips and overtime, and axes tax credits for electric vehicles.

Musk has not responded to Trump's latest jab on slashing his government contracts. He has, in the meantime, vociferously opposed the spending priorities outlined in the bill, and threatened to start a new political party if it passes in the Senate.

On Saturday, Musk wrote on X that the president's proposed bill "will destroy millions of jobs in America and cause immense strategic harm to our country."

"Utterly insane and destructive. It gives handouts to industries of the past while severely damaging industries of the future," he said.

This is at least the second time Trump has threatened to end Musk's contracts since their very public falling-out in June.

"The easiest way to save money in our Budget, Billions and Billions of Dollars, is to terminate Elon's Governmental Subsidies and Contracts. I was always surprised that Biden didn't do it!" Trump wrote in June, at the start of their feud.

Responding to the jab, Musk wrote on X that SpaceX "will begin decommissioning its Dragon spacecraft immediately." He walked back the comment a few hours later.

SpaceX works closely with NASA, and its Dragon spacecraft are used to ferry NASA astronauts and supplies to and from the International Space Station.

Representatives for Trump and Musk did not respond to requests for comment from Business Insider.

Read the original article on Business Insider

America's small business owners are being swamped by scammers

1 July 2025 at 01:09
Business overflowing with scam websites

Getty Images; Ava Horton/BI

Sometime last year, Ian Lamont's inbox began piling up with inquiries about a job listing. The Boston-based owner of a how-to guide company hadn't opened any new positions, but when he logged onto LinkedIn, he found one for a "Data Entry Clerk" linked to his business's name and logo.

Lamont soon realized his brand was being scammed, which he confirmed when he came across the profile of someone purporting to be his company's "manager." The account had fewer than a dozen connections and an AI-generated face. He spent the next few days warning visitors to his company's site about the scam and convincing LinkedIn to take down the fake profile and listing. By then, more than twenty people had reached out to him directly about the job, and he suspects many more had applied.

Generative AI's potential to bolster business is staggering. According to one 2023 estimate from McKinsey, in the coming years it's expected to add more value to the global economy annually than the entire GDP of the United Kingdom. At the same time, GenAI's ability to almost instantaneously produce authentic-seeming content at mass scale has created the equally staggering potential to harm businesses.

Since ChatGPT's debut in 2022, online businesses have had to navigate a rapidly expanding deepfake economy, where it's increasingly difficult to discern whether any text, call, or email is real or a scam. In the past year alone, GenAI-enabled scams have quadrupled, according to the scam reporting platform Chainabuse. In a Nationwide insurance survey of small business owners last fall, a quarter reported having faced at least one AI scam in the past year. Microsoft says it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, who researches online adversarial abuse at Georgetown University, tells me she calls the GenAI boom the "industrial revolution for scams," as it automates frauds, lowers barriers to entry, reduces costs, and increases access to targets.

The consequences of falling for an AI-manipulated scam can be devastating. Last year, a finance clerk at the engineering firm Arup joined a video call with what he believed were his colleagues. It turned out that each of the attendees was a deepfake recreation of a real coworker, including the organization's chief financial officer. The fraudsters asked the clerk to approve overseas transfers amounting to more than $25 million, and assuming the request came from the CFO, he green-lit the transaction.

Business Insider spoke with professionals in several industries, including recruitment, graphic design, publishing, and healthcare, who are scrambling to keep themselves and their customers safe against AI's ever-evolving threats. Many feel like they're playing an endless game of whack-a-mole, and the moles are only multiplying and getting more cunning.


Last year, fraudsters used AI to build a French-language replica of the online Japanese knives store Oishya, and sent automated scam offers to the company's 10,000-plus followers on Instagram. The fake company told customers of the real company they had won a free knife and that all they had to do was pay a small shipping fee to claim it, and nearly 100 people fell for it. Kamila Hankiewicz, who has run Oishya for nine years, learned about the scam only after several victims contacted her asking how long they needed to wait for the parcel to arrive.

It was a rude awakening for Hankiewicz. She's since ramped up the company's cybersecurity and now runs campaigns to teach customers how to spot fake communications. Though many of her customers were upset about getting defrauded, Hankiewicz helped them file reports with their financial institutions for refunds. Rattling as the experience was, "the incident actually strengthened our relationship with many customers who appreciated our proactive approach," she says.

Rob Duncan, the VP of strategy at the cybersecurity firm Netcraft, isn't surprised at the surge in personalized phishing attacks against small businesses like Oishya. GenAI tools now allow even a novice lone wolf with little technical know-how to clone a brand's image and write flawless, convincing scam messages within minutes, he says. With cheap tools, "attackers can more easily spoof employees, fool customers, or impersonate partners across multiple channels," Duncan says.

Though mainstream AI tools like ChatGPT have guardrails against requests that infringe copyright, there are now plenty of free or inexpensive online services that allow users to replicate a business's website with simple text prompts. Using a tool called Llama Press, I was able to produce a near-exact clone of Hankiewicz's store and personalize it with a few words of instruction. (Kody Kendall, Llama Press's founder, says cloning a store like Oishya's doesn't trigger a safety block because there can be legitimate reasons to do so, like when a business owner is trying to migrate their website to a new hosting platform. He adds that Llama Press relies on Anthropic's and OpenAI's built-in safety checks to weed out bad-faith requests.)

Text is just one front of the war businesses are fighting against malicious uses of AI. With the latest tools, it takes a solo adversary, again with no technical expertise, as little as an hour to create a convincing fake job candidate to attend a video interview.

Tatiana Becker, a tech recruiter based in New York, tells me deepfake job candidates have become an "epidemic." Over the past couple of years, she has had to frequently reject scam applicants who use deepfake avatars to cheat on interviews. At this point, she's able to discern some of the telltale signs of fakery, including glitchy video quality and a candidate's refusal to switch up any element of their appearance during the call, such as taking off their headphones. Now, at the start of every interview, she asks for the candidate's ID and poses more open-ended questions, like what they like to do in their free time, to suss out if they're human. Ironically, she's made herself more robotic at the outset of interviews to sniff out the robots.

Nicole Yelland, a PR executive, says she found herself on the opposite end of deepfakery earlier this year. A scammer impersonating a startup recruiter approached her over email, saying he was looking for a head of comms, with an offer package that included generous pay and benefits. The purported recruiter even shared with her an exhaustive slide deck, decorated with AI-generated visuals, outlining the role's responsibilities and benefits. Enticed, she scheduled an interview.

During the video meeting, however, the "hiring manager" refused to speak, and instead asked Yelland to type her responses to the written questions in the Microsoft Teams chat section. Her alarm bells really went off once the interviewer started asking her to share a series of private documents, including her driver's license.

Yelland now runs a background check with tools like Spokeo before engaging with any stranger online. "It's annoying and takes more time, but engaging with a spammer is more annoying and time-consuming; so this is where we are," she says.

While videoconferencing platforms like Teams and Zoom are getting better at detecting AI-generated accounts, some experts say the detection itself risks creating a vicious cycle. The data these platforms collect on what's fake is ultimately used to train more sophisticated GenAI models, which will help those models get better at escaping fakery detectors and fuel "an arms race defenders cannot win," says Jasson Casey, the CEO of Beyond Identity, a cybersecurity firm that specializes in identity theft. Casey and his company believe the focus should instead be on authenticating a person's identity. Beyond Identity sells tools that can be plugged into Zoom to verify meeting participants through their device's biometrics and location data. If a discrepancy is detected, the tools label the participant's video feed as "unverified." Florian Tramèr, a computer science professor at ETH Zurich, agrees that authenticating identity will likely become more essential to ensure that you're always talking to a legitimate colleague.

It's not just fake job candidates entrepreneurs now have to contend with; it's also fake versions of themselves. In late 2024, scammers ran ads on Facebook for a video featuring Jonathan Shaw, the deputy director of the Baker Heart and Diabetes Institute in Melbourne. Although the person in it looked and sounded exactly like Dr. Shaw, the voice had been deepfaked and edited to say that metformin, a first-line treatment for type 2 diabetes, is "dangerous," and that patients should instead switch to an unproven dietary supplement. The fake ad was accompanied by a fake written news interview with Shaw.

Several of his clinic's patients, believing the video was genuine, reached out asking how to get a hold of the supplement. "One of my longstanding patients asked me how come I continued to prescribe metformin to him, when 'I' had said on the video that it was a poor drug," Shaw tells me. Eventually he was able to get Facebook to take down the video.

Then there's the equally vexing and annoying issue of AI slop: an inundation of low-quality, mass-produced images and text that is flooding the internet and making it ever more difficult for the average person to tell what's real or fake. In her research, DiResta found instances where social platforms' recommendation engines promoted malicious slop, with scammers putting up images of items like nonexistent rental properties and appliances that users frequently fell for, giving away their payment details.

On Pinterest, AI-generated "inspo" posts have plagued people's mood boards, so much so that Philadelphia-based Cake Life Shop now often receives orders from customers asking them to recreate what are actually AI-generated cakes. In one example shared with Business Insider, the cake resembles a moss-filled rainforest and features a functional waterfall. Thankfully for cofounder Nima Etemadi, most customers are "receptive to hearing about what is possible with real cake after we burst their AI bubble," he says.

Similarly, AI-generated books have swarmed Amazon and are now hurting publisher sales.

Pauline Frommer, the president of the travel guide publisher Frommer Media, says that AI-generated guidebooks have managed to reach the top of lists with the help of fake reviews. An AI publisher buys a few Prime memberships, sets the guidebook's ebook price to zero, and then leaves seemingly "verified reviews" by downloading its copies for free. These practices, she says, "will make it virtually impossible for a new, legitimate brand of guidebook to enter the business right now." Ian Lamont says he received an AI-generated guidebook as a gift last year: a text-only guide to Taiwan, with no pictures or maps.


While the FTC now considers it illegal to publish fake, AI-generated product reviews, official policies haven't yet caught up with AI-generated content itself. Platforms like Pinterest and Google have started to watermark and label AI-generated posts, but since the labeling isn't error-free yet, some worry these measures may do more harm than good. DiResta fears that a potential unintended consequence of ubiquitous AI labels would be people experiencing "label fatigue," where they blindly assume that unlabeled content is therefore always "real." "It's a potentially dangerous assumption if a sophisticated manipulator, like a state actor's intelligence service, manages to get disinformation content past a labeler," she says.

For now, small business owners should stay vigilant, says Robin Pugh, the executive director of Intelligence for Good, a nonprofit that helps victims of internet-enabled crimes. They should always verify that they're dealing with an actual human and that the money they're sending is going where they intend it to go.

Etemadi of Cake Life Shop recognizes that as much as GenAI can help his business become more efficient, scam artists will ultimately use the same tools to become just as efficient. "Doing business online gets more necessary and high risk every year," he says. "AI is just part of that."


Shubham Agarwal is a freelance technology journalist from Ahmedabad, India, whose work has appeared in Wired, The Verge, Fast Company, and more.

Read the original article on Business Insider

Tech giants play musical chairs with foundation models

1 July 2025 at 01:15

There are five consumer-tech giants, but only three leading AI foundation models.

Why it matters: When the deal-making music stops, someone's going to be left out.


Driving the news: Apple is talking with both Anthropic and OpenAI about using their foundation models to power Siri, after in-house efforts to upgrade Apple's voice assistant have faltered, Bloomberg reported Monday.

  • Once cutting edge, Siri is now a glaring anachronism in a world enthralled by the verbal and vocal agility of LLMs, and an irritating reminder to everyone at Apple of how far they've lagged behind in the voice-assistant competition.

The big picture: Every big player in tech is working on its own foundation models, the biggest and most ambitious large language models that fuel ChatGPT and all the other services at the heart of the generative AI revolution.

  • OpenAI, Anthropic and Google seized the high ground early and have stayed ahead of the pack in scale, innovative advances, and subtle refinements.
  • Important runners-up range from Elon Musk's xAI to France's Mistral and China's DeepSeek.

Most of tech's five trillion-dollar giants already have a match in the foundation-model game, but there's constant movement, and the music is still playing.

Google is the only one of the giants that has built its own top-tier model.

  • Its researchers made the breakthroughs that power LLMs today, but it was slow to share advances with the public, opening the door for OpenAI to stun the world with ChatGPT in November 2022.
  • Since then, Google has poured enormous resources into Gemini (once known as Bard) and begun rebuilding most of its products, including its dominant search engine, around the model.

Microsoft tied its future to OpenAI early in the game with a gigantic investment and a commitment to deploying OpenAI models to the vast installed base of Microsoft users.

  • But the alliance has frayed, and the two companies are locked in high-stakes negotiations for an amicable breakup.
  • Microsoft also has its own in-house foundation model project, but that has yet to surface.

Meta got a later start and bet heavily on an open-source strategy with its Llama model family.

  • But disappointment in the progress made by the most recent Llama flagship model has forced a rethink, per the New York Times.
  • The Times reported last week that Meta execs had discussed "de-investing" in Llama, though the company denies that.
  • Meanwhile, CEO Mark Zuckerberg has poached a number of OpenAI researchers and added Scale AI founder Alexandr Wang and former GitHub CEO Nat Friedman to his roster.
  • Zuckerberg announced Monday that these new hires would lead a unit called Meta Superintelligence Labs that will bring together all of Meta's foundation model work.

Amazon has invested in its own families of models to offer its cloud customers, chiefly Nova and Titan.

  • But for its effort to improve the popular but aging Alexa voice assistant, Amazon found that Nova alone couldn't handle the job and pulled Anthropic's Claude in as well.
  • Amazon has also made multiple investments in Anthropic.

All this means that Apple's choices are limited.

  • Apple wouldn't turn to Google, not only because both companies are defending giant antitrust lawsuits that might make a deal perilous, but also for historical-cultural reasons.
  • Apple never forgave Google for building Android, though it still takes Google's billions for making Google search the iPhone default.

That leaves OpenAI and Anthropic, both of which Apple has explored partnering with, per Bloomberg.

  • Siri already lets users route questions to OpenAI's ChatGPT.
  • But a team tasked with evaluating Apple's external options found that Anthropic's Claude was the best candidate for the broader Siri upgrade, Bloomberg reported.

Yes, but: Apple could still decide to redouble its internal efforts instead.

  • The company has a long history of avoiding shipping half-baked products and letting projects take as long as they need to become what Apple thinks of as "great."
  • But the pace of AI change is putting that strategy to its toughest test yet.

Zoom out: Driving and shaping all these firms' deals and choices is the war for AI talent, with both giants and startups desperately throwing money at a relatively small number of researchers.

  • Each company hopes that its team will be able to deliver on the astronomical promises executives have made about AI's transformative benefits and ultimate profits.
  • But not everyone can win, and each of the tech industry's previous waves has had only one or two victors.

Andy Jassy says AI will eliminate some Amazon jobs — but create more in at least 2 areas

1 July 2025 at 00:19
Amazon CEO Andy Jassy
Amazon's CEO, Andy Jassy, says AI technologies will create jobs in AI and robotics.

REUTERS/Brendan McDermid

  • Amazon's CEO, Andy Jassy, says AI will transform jobs at the company.
  • The tech will create new jobs in robotics and AI, despite automating some existing roles, he said.
  • Amazon has 500 open robotics roles on LinkedIn.

AI isn't all doom and gloom for jobs, said Amazon's Andy Jassy.

In an interview with CNBC published on Monday, the Amazon CEO deemed AI "the most transformative technology in our lifetime." He said that it would change things not only for Amazon customers but also for its employees.

Jassy said that AI technologies would create jobs in at least two areas of the company.

"With every technical transformation, there will be fewer people doing some of the jobs that the technology actually starts to automate," he said. "Are there going to be other jobs? We're going to hire more people in AI and more people in robotics, and there are going to be other jobs that the technology wants you to go higher that we'll hire over time too."

Jassy said that AI agents, which do tasks like coding, research, analytics, and spreadsheet work, would also change the nature of every employee's job.

"They won't have to do as much rote work," he said. "Every single person gets to start every task at a more advanced starting spot."

On LinkedIn, Amazon has added at least 500 open roles worldwide with the keyword "robotics" in the job title in the past month. Roles span internships to senior applied scientist positions.

The Amazon robotics senior applied scientist job description includes tasks like "developing machine-learning capabilities and infrastructure for robotic perception and motion" and "building visualization tools for analyzing and debugging robot behavior."

Jassy's comments came in response to a question about his June 17 memo, which outlined how AI would change the company's workforce.

"It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company," he wrote.

Some Amazon employees were not happy with Jassy's message. In internal Slack channels, some called for leadership to share in the fallout, while others saw it as a layoff warning, Business Insider reported.

"There is nothing more motivating on a Tuesday than reading that your job will be replaced by AI in a few years," one person wrote in Slack.

Amazon employs about 1.5 million workers, according to its website, and has cut almost 28,000 jobs since the start of 2022, per Layoffs.fyi.

From Jassy's memo and Monday's interview, it is unclear which or how many Amazon employees would be affected by AI-driven job changes.

Other tech CEOs have raised the alarm on AI-related job cuts, especially for white-collar and entry-level roles.

In April, Micha Kaufman, the CEO and founder of the freelance-job site Fiverr, wrote in an email to employees: "It does not matter if you are a programmer, designer, project manager, data scientist, lawyer, customer support rep, salesperson, or a finance person – AI is coming for you."

In late May, Anthropic's CEO, Dario Amodei, suggested AI could wipe out half of all entry-level white-collar jobs.

"We, as the producers of this technology, have a duty and an obligation to be honest about what is coming," Amodei told Axios in an interview. "I don't think this is on people's radar."

Read the original article on Business Insider

Microsoft says its new health AI beat doctors in accurate diagnoses by a mile

30 June 2025 at 23:45
Microsoft
Microsoft said its medical AI diagnosed cases four times as accurately as human doctors.

Matthias Balk/picture alliance via Getty Images

  • Microsoft said its medical AI diagnosed cases four times as accurately as human doctors.
  • The AI system also solved cases "more cost-effectively" than its human counterparts, Microsoft said.
  • The study comes as AI's growing role in healthcare raises questions about its place in medicine.

Microsoft said its medical AI system diagnosed cases more accurately than human doctors by a wide margin.

In a blog post published on Monday, the tech giant said its AI system, the Microsoft AI Diagnostic Orchestrator, diagnosed cases four times as accurately as a group of experienced physicians in a test.

Microsoft's study comes as AI tools rapidly make their way into hospitals and clinics, raising questions about how much of medicine can or should be automated and what role doctors will play as diagnostic AI systems get more capable.

The experiment involved 304 case studies sourced from the New England Journal of Medicine. Both the AI and physicians had to solve these cases step by step, just like they would in a real clinic: ordering tests, asking questions, and narrowing down possibilities.

The AI system was paired with large language models from tech companies like OpenAI, Meta, Anthropic, and Google. When coupled with OpenAI's o3, the AI diagnostic system correctly solved 85.5% of the cases, Microsoft said.

By contrast, 21 practicing physicians from the US and UK β€” each with five to 20 years of experience β€” averaged 20% accuracy across the completed cases, the company added. In the study, the doctors did not have access to resources they might typically tap for diagnostics, including coworkers, books, and AI.

The AI system also solved cases "more cost-effectively" than its human counterparts, Microsoft said.

"Our findings also suggest that AI reduce unnecessary healthcare costs. US health spending is nearing 20% of US GDP, with up to 25% of that estimated to be wasted," it added.

"We're taking a big step towards medical superintelligence," said Mustafa Suleyman, the CEO of Microsoft's AI division, in a post on X.

He added that the cases used in the study are "some of the toughest and most diagnostically complex" a physician can face.

Suleyman previously led AI efforts at Google.

Microsoft did not respond to a request for comment from Business Insider.

Will AI replace doctors?

Microsoft said in the blog post that AI "represents a complement to doctors and other health professionals."

"While this technology is advancing rapidly, their clinical roles are much broader than simply making a diagnosis. They need to navigate ambiguity and build trust with patients and their families in a way that AI isn't set up to do," Microsoft said.

"Clinical roles will, we believe, evolve with AI," it added.

Tech leaders like Microsoft cofounder Bill Gates have said that AI could help solve the long-standing shortage of doctors.

"AI will come in and provide medical IQ, and there won't be a shortage," he said on an episode of the "People by WTF" podcast published in April.

But doctors have told BI that AI can't and shouldn't replace clinicians just yet.

AI can't replicate physicians' presence, empathy, and nuanced judgment in uncertain or complex conditions, said Dr. Shravan Verma, the CEO of a Singapore-based health tech startup.

Chatbots and AI tools can handle the first mile of care, but they must escalate to qualified professionals when needed, he told BI last month.

Do you have a story to share about AI in healthcare? Contact this reporter at [email protected].

Read the original article on Business Insider

Ray Dalio warns that politicians can't solve the 'debt bomb problem' without serious voter blowback

30 June 2025 at 23:30
Ray Dalio speaks onstage during the 2025 TIME100 Summit at Jazz at Lincoln Center in New York City on April 23, 2025.
Ray Dalio said on Monday that the US "debt bomb problem" can only be solved with a "mix of tax revenue increases and spending decreases that are determined in a bipartisan way."

Jemal Countess via Getty Images

  • Ray Dalio said solving America's debt problem will require a bipartisan solution.
  • But Dalio said Republicans and Democrats can't do it "because politics have become so absolutist."
  • "To me, that's a tragedy," he added.

Ray Dalio, the founder and former CEO of hedge fund Bridgewater Associates, said on Monday that Republicans and Democrats don't have the political will to reduce the country's national debt.

"There is no way that the deficit/debt bomb problem can be sustainably dealt with unless there is a mix of tax revenue increases and spending decreases that are determined in a bipartisan way," Dalio wrote in an X post on June 30.

Dalio said in his post that both Republicans and Democrats know they have to cut the deficit by raising taxes and cutting spending. This "would lead to a supply/demand balance improvement for US debt which in turn would lower interest rates," he added.

Dalio said lower interest rates would "reduce the budget deficit" and "help the markets and the economy."

"But because politics have become so absolutist, they feel they can't go down this obviously best path because both their constituents and their parties will throw them out of office if they explored this more balanced approach," Dalio wrote on X.

"To me, that's a tragedy," he added.

A representative for Dalio did not respond to a request for comment from Business Insider.

This isn't the first time Dalio has warned about the ramifications of leaving debt levels unattended. In February, Dalio told attendees at the World Governments Summit in Dubai that the US could suffer the financial equivalent of a "heart attack" if debt continued to soar.

Then, in March, Dalio said he had met House Budget Chair Jodey Arrington and other House Republicans to discuss the national debt. Dalio said after the meeting that there was a "pretty broad recognition" among attendees that the deficit needed to be reduced to around 3% of US GDP.

The federal government spent $6.75 trillion in the 2024 fiscal year but only collected $4.92 trillion in revenue. The resulting deficit of $1.83 trillion was $138 billion higher than the previous fiscal year's.

Dalio isn't the only business leader who has sounded the alarm on the US debt problem.

Elon Musk, the CEO of Tesla and SpaceX, has sharply criticized President Donald Trump's "One Big Beautiful Bill" for worsening the country's debt situation. Musk was a prominent financial backer of Trump and used to lead the administration's cost-cutting outfit, the White House DOGE Office.

But Musk broke with Trump just days after he announced his departure from DOGE on May 28. Musk called Trump's bill a "MOUNTAIN of DISGUSTING PORK" on June 5 before walking back his attacks the following week.

Musk, however, resumed his attacks on the bill over the weekend. The bill is pending a vote in the Senate, and GOP lawmakers hope to send it to Trump's desk by July 4.

"It is obvious with the insane spending of this bill, which increases the debt ceiling by a record FIVE TRILLION DOLLARS that we live in a one-party country β€” the PORKY PIG PARTY!!" Musk wrote in an X post on Monday.

Read the original article on Business Insider

Yesterday, 30 June 2025

Trump-Musk feud reignites after Tesla CEO calls for new political party

30 June 2025 at 22:34

Elon Musk again blasted President Trump's signature spending bill Monday as the Senate worked through a marathon amendment session to send the measure to the Oval Office by July 4.

The latest: Hours after Musk threatened to form a new political party if the "big, beautiful bill" passed, Trump claimed early Tuesday that the Tesla CEO "may get more subsidy than any human being in history" and suggested he may have DOGE, which Musk once spearheaded, take a "good, hard, look" at the businesses with which the billionaire has government contracts.


Screenshot: President Trump/Truth Social

Why it matters: The relationship between Trump and Musk, once a ubiquitous figure in the White House and at DOGE, imploded over the president's "big, beautiful bill."

  • "It is obvious with the insane spending of this bill, which increases the debt ceiling by a record FIVE TRILLION DOLLARS that we live in a one-party country – the PORKY PIG PARTY!!" Musk wrote on his social media site X.
  • "Time for a new political party that actually cares about the people," the world's richest man added.

What they're saying: In a separate post, Musk doubled down on his rhetoric, saying every member of Congress who votes to pass the bill should "hang their head in shame."

  • "[T]hey will lose their primary next year if it is the last thing I do on this Earth," Musk said.
  • In a post later Monday, Musk said, "If this insane spending bill passes, the America Party will be formed the next day."
  • He added that the country needs "an alternative to the Democrat-Republican uniparty so that the people actually have a VOICE."

Of note: The White House initially responded to Musk's comments by pointing to Trump's remarks to Fox News over the weekend saying that the world's richest person was upset the bill would end subsidies for EVs.

  • "I think Elon is a wonderful guy, and I know he's going to do well, always. He's a smart guy," Trump said in the interview.

Friction point: Musk, who slashed and burned through federal agencies atop the Department of Government Efficiency, has repeatedly decried the size of Trump's spending package, now estimated to add more than $3 trillion to the national debt.

  • The rupture spilled over into an open feud earlier this month, with Musk lashing out at Trump in a series of X posts, which he later walked back.
  • Musk, who claimed credit for Trump's reelection, had floated the idea of founding a new political party in an X poll of his readers on June 5.

Editor's note: This story has been updated with additional reporting, including details of President Trump's Tuesday morning post to Truth Social.
