Meta job cuts have begun. Here's everything we know so far.

Meta CEO Mark Zuckerberg. Chris Unger/Zuffa LLC via Getty Images

  • Meta has begun to cut thousands of jobs to focus on AI investment and efficiency.
  • Mark Zuckerberg is targeting low performers, part of a broader industry move toward leaner operations.
  • Some employees told Business Insider they're anxious about the changes.

Meta has begun to cut thousands of jobs as the social media giant takes a tougher stance on underperforming employees and readies its finances for another year of heavy AI investment. Affected employees in Europe, Asia, and the US have started to be notified, per an internal post viewed by Business Insider.

The company has said it will eliminate roughly 5% of its workforce, which could mean almost 4,000 employees lose their jobs.

CEO Mark Zuckerberg told staff in January he would "raise the bar" and move quickly to remove low performers, according to an internal memo seen by BI.

This is part of a broader push by Big Tech companies to make themselves leaner after a hiring spree during the pandemic. Microsoft, Amazon, Salesforce, and others are collectively eliminating thousands of employees.

Zuckerberg has been at the forefront of this, announcing a "year of efficiency" in 2023, a push that has continued through 2024 and into 2025. Wall Street has rewarded Meta for this new focus, sending the company's shares soaring since the start of 2023 — a run that's added more than $1 trillion to Meta's market valuation.

While Meta remained profitable through recent periods of heavy hiring and big spending, the company is now racing to keep up with rivals in generative AI. That race requires billions of dollars in infrastructure and related investment, which is likely putting pressure on Zuckerberg to seek cost savings elsewhere.

A Meta spokesperson declined to comment.

Impact on some employees

Meta is offering impacted workers in the US a severance package that includes 16 weeks of pay and an additional two weeks for every year of service, according to two people familiar with the matter.

For some Meta employees, the efficiency drive is causing anxiety. These staffers asked not to be identified discussing sensitive topics.

"Mark is creating fear," a Meta employee told BI. "He's creating a culture where you have to be loyal to him or else."

Another employee said that working at Meta right now "feels like living in a George Orwell novel."

Even colleagues who have performed well "have been disappearing all year, and when you ask about it, you're just told, 'They're no longer with the company,'" this person said. "Self-censorship is rampant. At a company supposedly dedicated to connecting people, the human side of our work is disappearing, and everyone is acting more robotic."

Another Meta employee said reductions shouldn't be branded as performance-based cuts because this could damage people's reputations as they seek other opportunities.

"Now people have to go back out into the job market with a label that is incredibly unfair," this person added.

They expressed concern that good employees would be cut just to meet quotas and that this could have a negative impact on morale.

"What's the incentive to help a new hire ramp up if they're just going to stack rank us and probably do this all again next year?" this person added.

How Meta's latest job cuts may work

The job cuts are designed to target employees who receive "met some" or "did not meet" ratings, the bottom two categories in Meta's assessment system, in their performance reviews.

Internal guidance obtained by BI last month says managers must identify 12% to 15% of employees eligible for these ratings. Meta aims to reach 10% "nonregrettable attrition" by combining these cuts with previous departures, which also count toward the identification targets. For example, a team that already had 5% attrition in 2024 would need to place another 7% to 10% of its employees in the bottom ratings to hit the 12% to 15% band.
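
For readers who want the arithmetic spelled out, here is a minimal sketch in Python of how the identification band maps to the extra share of a team a manager must flag. It is purely illustrative, assumes the figures BI reported (a 12% to 15% band, with prior-year attrition counting toward it), and uses a hypothetical function name; it is not any internal Meta tool.

```python
# Illustrative only: the 12%-15% identification band reported by BI,
# with prior-year nonregrettable attrition counted toward it.
def extra_bottom_rating_share(prior_attrition, band=(0.12, 0.15)):
    """Return the additional share of a team a manager would still need
    to place in the bottom ratings, given attrition already counted."""
    low, high = band
    return max(0.0, low - prior_attrition), max(0.0, high - prior_attrition)

# Example from the article: a team with 5% attrition in 2024.
lo, hi = extra_bottom_rating_share(0.05)
print(f"{lo:.0%} to {hi:.0%}")  # -> 7% to 10%
```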

One Meta employee told BI that forcing managers to place team members into bottom categories for job cuts had spread anxiety through the management ranks as well as the rank and file.

On Friday, employees received a memo from Janelle Gale, Meta's vice president of human resources, detailing how the process should work. The memo, which was obtained by BI, said affected employees would be notified through their work and personal email addresses and lose access to company systems within an hour of being informed. They'll receive information on their severance packages in the same email, it added.

The notifications will be staggered across time zones, with employees in the Asia Pacific region being notified first, followed by those in Europe, the Middle East, and Africa, and then, finally, North and Latin America, the memo said.

Employees in European countries such as Germany, France, Italy, and the Netherlands will be exempt from this process because of local regulations and will instead follow local performance management processes, the memo said. Meta intends to backfill these roles, it added, but plans and timelines "may vary."

How Meta is reorganizing itself

Amid the cuts, the social media giant is also reorganizing some of its businesses and divisions.

The company is merging its Facebook and Messenger teams under Facebook's chief, Tom Alison, while Messenger's head, Loredana Crisan, is set to move to the generative-AI group, The Information said.

Meta's Reality Labs division, which has lost nearly $60 billion since 2020, is being more tightly integrated with Meta's main business, reversing some of Zuckerberg's 2021 reorganization. In an internal memo obtained by BI, Reality Labs' chief technology officer, Andrew Bosworth, said Reality Labs had "become a positive driver for Meta's overall brand."


The tech industry is in a frenzy over DeepSeek. Here's who could win and lose from China's AI progress.

DeepSeek has sent Silicon Valley and the tech industry into a frenzy. Tyler Le/Business Insider

  • DeepSeek, a Chinese open-source AI firm, is taking over the discussion in tech circles.
  • Tech stocks, especially Nvidia, plunged Monday.
  • Companies leading the AI boom could be in for a reset as DeepSeek upends the status quo.

DeepSeek, a Chinese company with AI models that compete with OpenAI's at a fraction of the cost, is generating almost as many takes as tokens.

Across Silicon Valley, executives, investors, and employees debated the implications of such efficient models. Some called into question the trillions of dollars being spent on AI infrastructure since DeepSeek says its models were trained for a relative pittance.

"This is insane!!!!" Aravind Srinivas, CEO of startup Perplexity AI, wrote in response to a post on X noting that DeepSeek models are cheaper and better than some of OpenAI's latest offerings.

The takes on DeepSeek's implications are coming fast and hot. Here are seven of the most common.

Take 1: Generative AI adoption will explode

"Jevons paradox strikes again!" Microsoft CEO Satya Nadella posted on X Monday morning. "As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of."

The idea that as technology gets smarter, cheaper, or both, demand for it only grows exponentially is based on a 19th-century economic principle. In this case, the barrier to entry for companies looking to dip their toes into AI has been high. Cheaper tools could encourage more experimentation and advance the technology faster.

"Similar to Llama, it lowers the barriers to adoption, enabling more businesses to accelerate AI use cases and move them into production." Umesh Padval, managing director at Thomvest Ventures told Business Insider.

That said, even if AI grows faster than ever, that doesn't necessarily mean the trillions of dollars of investment that have flooded the space will pay off.

Take 2: DeepSeek broke the prevailing wisdom about the cost of AI

"DeepSeek seems to have broken the assumption that you need a lot of capital to train cutting-edge models," Debarghya Das, an investor at Menlo Ventures told BI.

The price of DeepSeek's open-source model is competitive — 20 to 40 times cheaper to use than comparable models from OpenAI, according to Bernstein analysts.

The exact cost of building DeepSeek's models is hotly debated. The research paper from DeepSeek explaining its V3 model lists a training cost of $5.6 million — a strikingly low number for other providers of foundation models.

However, the same paper says that the "aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data." So the $5.6 million figure is only part of the equation.

The tech ecosystem is also reacting strongly to the implication that DeepSeek's state-of-the-art model architecture will be cheaper to run.

"This breakthrough slashes computational demands, enabling lower fees — and putting pressure on industry titans like Microsoft and Google to justify their premium pricing," Kenneth Lamont, principal at Morningstar, wrote in a note on Monday.

He went on to remind investors that with early-stage technology, assuming the winners are set is folly.

"Mega-trends rarely unfold as expected, and today's dominant players might not be tomorrow's winners," Lamont wrote.

Dmitry Shevelenko, the chief business officer at Perplexity, a big consumer of compute and existing models, concurred that Big Tech players would need to rethink their numbers.

"It certainly challenges the margin structure that maybe they were selling to investors," Shevelenko told BI. "But in terms of accelerating the development of these technologies, this is a good thing." Perplexity has added DeepSeek's models to its platform.

Take 3: Companies are considering a switch to DeepSeek

On Monday, several platforms that provide AI models for businesses — Groq and Liquid.AI, to name two — added DeepSeek's models to their offerings.

On Amazon's internal Slack, one person posted a meme suggesting that developers might drop Anthropic's Claude AI model in favor of DeepSeek's offerings. The post included an image of the Claude model crossed out.

"Friendship ended with Claude. Now DeepSeek is my best friend." the person wrote, according to a screenshot of the post seen by BI, which got more than 60 emoji reactions from colleagues.

Amazon has invested billions of dollars in Anthropic. The cloud giant also provides access to Claude models via its Amazon Web Services platform. And some AWS customers are asking for DeepSeek, BI has exclusively reported.

"We are always listening to customers to bring the latest emerging and popular models to AWS," an Amazon spokesperson said, while noting that customers can access some DeepSeek-related products on AWS right now through tools such as Bedrock.

"We expect to see many more models like this — both large and small, proprietary and open-source — excel at different tasks," the Amazon spokesperson added. "This is why the majority of Amazon Bedrock customers use multiple models to meet their unique needs and why we remain focused on providing our customers with choice — so they can easily experiment and integrate the best models for their specific needs into their applications."

Switching costs for companies building their own products on top of foundation models are relatively low, which is raising questions about whether DeepSeek will overtake models from Meta, Anthropic, or OpenAI in popularity with enterprises. (DeepSeek's app is already number one in Apple's App Store.)

DeepSeek, however, is owned by the Chinese hedge fund High-Flyer, and the same security concerns haunting TikTok may eventually apply to it.

"While open-source models like DeepSeek present exciting opportunities, enterprises—especially in regulated industries—may hesitate to adopt Chinese-origin models due to concerns about training data transparency, privacy, and security," Padval said.

Security concerns aside, the software companies that sell APIs to businesses have been adding DeepSeek throughout Monday.

Take 4: Infrastructure players could take a hit

Infrastructure-as-a-service companies such as Oracle, DigitalOcean, and Microsoft could be in a precarious position should more efficient AI models rule in the future.

"The sheer efficiency of DeepSeek's pre and post training framework (if true) raises the question as to whether or not global hyperscalers and governments, that have and intend to continue to invest significant capex dollars into AI infrastructure, may pause to consider the innovative methodologies that have come to light with DeepSeek's research," wrote Stifel analysts.

If the same quantity of work requires less compute, those selling only compute could suffer, Barclays analysts wrote.

"With the increased uncertainty, we could see share price pressure amongst all three," according to the analysts.

Microsoft and DigitalOcean declined to comment. Oracle did not respond to a request for comment in time for publication.

Take 5: Scaling isn't dead, it's just moved

For months, AI luminaries, including Nvidia CEO Jensen Huang, have been predicting a big shift in AI from a focus on training to a focus on inference. Training is the process by which models are created, while inference is the type of computing that runs AI models and related tools such as ChatGPT.

The shift of computing's overall share toward inference has been underway for a while, but now the change is coming from two places. First, more AI users mean more inference demand. Second, part of DeepSeek's secret sauce is that improvement takes place in the inference stage. Nvidia, via a spokesperson, put a positive spin on the news.

"DeepSeek is an excellent AI advancement and a perfect example of Test Time Scaling. DeepSeek's work illustrates how new models can be created using that technique, leveraging widely-available models and compute that is fully export control compliant," an Nvidia spokesperson told BI.

"Inference requires significant numbers of NVIDIA GPUs and high-performance networking. We now have three scaling laws: pre-training and post-training, which continue, and new test-time scaling."

Take 6: Open-source changes model building

The most under-hyped part of DeepSeek's innovations is how easy it will now be to take any AI model and turn it into a more powerful "reasoning" model, according to Jack Clark, an Anthropic cofounder and former OpenAI employee, who wrote about DeepSeek in his newsletter, Import AI, on Monday.

Clark also explained that some AI companies, such as OpenAI, have been hiding all the reasoning steps that their latest AI models take. DeepSeek's models show all these intermediate "chains of thought" for anyone to see and use. This radically changes how AI models are controlled, Clark wrote.

"Some providers like OpenAI had previously chosen to obscure the chains of thought of their models, making this harder," Clark explained. "There's now an open-weight model floating around the internet which you can use to bootstrap any other sufficiently powerful base model into being an AI reasoner. AI capabilities worldwide just took a one-way ratchet forward."

Take 7: Programmers still matter

DeepSeek improved by using novel programming methods, which Samir Kumar, co-founder and general partner at VC firm Touring Capital, saw as a reminder that humans are still coding the most exciting innovations in AI.

He told BI that DeepSeek is "a good reminder of the talent and skillset of hardcore human low-level programmers."

Got a tip or an insight to share? Contact BI's senior reporter Emma Cosgrove at [email protected] or use the secure messaging app Signal: 443-333-9088.

Contact Pranav from a nonwork device securely on Signal at +1-408-905-9124 or email him at [email protected].

You can email Jyoti at [email protected] or DM her via X @jyoti_mann1
