
Amazon is working on a secret project called 'Kiro,' a new tool that uses AI agents to streamline software coding

6 May 2025 at 10:48
Amazon CEO Andy Jassy

Brendan McDermid/REUTERS

  • Amazon is developing a new AI coding tool, internally codenamed Kiro.
  • Kiro aims to enhance software development with AI agent technology and a multi-modal interface.
  • AI coding assistants are rapidly growing, with major tech firms investing heavily in the space.

AI coding assistants are exploding in popularity. Amazon wants a piece of it.

According to an internal document obtained by Business Insider, Amazon Web Services is building a new AI coding tool, codenamed Kiro.

The software development application taps into AI agents to analyze user prompts and existing data, generating code in "near real-time," the document said.

Kiro is a web and desktop app that can be customized to work with first-party and third-party AI agents, Amazon explained in the document. It also taps into knowledge bases, extensions, and themes, further enhancing developer productivity.

Kiro will feature a multi-modal interface, allowing developers to input not just text but also visual diagrams and other contextual information, the document stated.

AWS offers an existing AI coding assistant called Amazon Q. The document obtained by BI suggests that Kiro may be a broader application that taps into multiple AI agents to automate or speed up many aspects of software development.

The tool is expected to be able to auto-generate technical design documents, flag potential issues, and offer code optimizations, Amazon explained in its internal document.

"There is an opportunity to reimagine how AI is used to build software at an exponentially faster rate of innovation and higher product quality," the company wrote.

Amazon disses other AI coding tools

The internal document critiques existing AI coding tools as being locked into "code-centric" interfaces that slow developers down. Kiro aims to "democratize" software creation, minimizing time-to-code and maximizing productivity, it said.

AWS had considered launching Kiro in late June, according to the document, though it remains uncertain whether that timeline is still in effect.

An AWS spokesperson declined to comment on Kiro specifically but told BI that the company is working on AI agent features for its existing products, like its Q Developer tool.

"AI agents are quickly transforming the developer experience, and we are rapidly creating innovative new approaches to software development that take full advantage of these powerful agentic capabilities," the spokesperson said. "We're only getting started."

'Explosion of coding agents'

AI coding assistants have seen a sharp surge in growth lately.

CEO Andy Jassy called out the growth of AI tools such as Cursor and Vercel during Amazon's earnings call last week, highlighting an "explosion of coding agents" among AWS customers.

Google and Microsoft both said that around 30% of their code is now written by AI. David Sacks, the White House's AI and crypto czar, recently called coding assistants the "first big break-out application of AI," noting explosive growth in tools like Cursor and Replit.

"The ramifications of moving from a world of code scarcity to code abundance are profound," Sacks wrote on X last week.

Startups in the space are attracting significant attention. Anysphere, which built Cursor, raised a huge funding round recently, and OpenAI agreed to buy AI coding startup Windsurf for $3 billion.

AI may change the role of human coders

By 2028, 9 out of 10 enterprise software engineers will use AI coding assistants, up from less than 14% in early 2024, according to Gartner estimates. It's unclear how this will reshape the role of human coders.

Last year, AWS CEO Matt Garman said it's possible that most software developers will not be coding in the future because of new AI tools, and that Amazon has to help employees "upskill and learn about new technologies" to boost their productivity, BI previously reported.

Amazon faced early hurdles with its Q coding assistant, sparking internal concerns over high costs and lackluster performance compared to rivals like Microsoft's Copilot, BI previously reported. The company's spokesperson said user experience with generative AI is "constantly evolving" and pointed to customers, including Deloitte and ADP, that saw productivity gains with Amazon Q.

Amazon believes tools like Kiro will simplify common tasks, such as integrating Stripe payments, while empowering developers to do more with less, according to the document.

"With Kiro, developers read less but comprehend more, code less but build more, and review less but release more," it said.

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.


Amazon's cloud business prepares for 'Buy Canada' questions and other Trump tariff fears

8 May 2025 at 12:28
Amazon CEO Andy Jassy

Amazon

  • Amazon Web Services recently issued guidance to staff on tariffs and data sovereignty concerns.
  • AWS aims to reassure customers amid uncertainty over tariffs and other geopolitical issues.
  • Amazon previously hinted that third-party sellers might raise prices on its site due to tariffs.

As tariffs spark growing uncertainty across Amazon's retail operations, the company's cloud division is quietly moving to head off similar concerns from business customers.

According to an internal document obtained by Business Insider, Amazon Web Services has issued new guidance to frontline sales and technical staff, instructing them on how to respond to customer questions about tariffs, data sovereignty, and potential restrictions tied to US government policy.

Among the talking points: If an AWS customer asks about possible price increases due to tariffs, employees are told to avoid direct answers and instead reaffirm pricing terms for those covered under existing Private Pricing Agreements (PPAs).

"In the event that AWS does increase prices, these increases will not change any agreed upon discounts, credits or service-specific rates in your PPA," the internal document stated.

While AWS may be less directly impacted by tariffs than Amazon's e-commerce business, the document reveals the company is concerned enough to prep staff with answers to potential tough customer questions.

The document covers questions ranging from potential price hikes to data-privacy concerns. It even broaches the possibility that US President Donald Trump might ban foreign companies from using AWS.

In a recent CNBC interview, Amazon CEO Andy Jassy acknowledged the situation remains fluid and emphasized efforts by the company's e-commerce business to keep consumer prices low. Still, he hinted that some third-party sellers might raise prices in response to tariffs. He also noted that, despite the uncertainty, Amazon continues its data center expansion.

Amazon, whose stock has dropped about 15% this year, is set to report first-quarter earnings on Thursday.

A spokesperson for the company referred BI to a statement from the internal document:

"We're closely monitoring the situation, and we are working to assess the impact on our business. As we navigate the evolving trade policy landscape, our focus remains on delivering value to our customers and innovating on their behalf."

Do not 'speculate'

Tariff-driven price hikes have already become a flashpoint in Amazon's retail division.

As BI previously reported, internal teams have struggled with forecasting, and vendors say Amazon has offered cost relief in exchange for strict margin guarantees. Meanwhile, third-party sellers say they're being forced to raise prices due to rising import costs.

AWS CEO Matt Garman

Amazon

What this means for AWS pricing remains unclear. Internal guidance tells employees not to "speculate," citing the rapidly evolving nature of trade policy.

Some cloud industry experts suggest tariffs could squeeze AWS more than the company lets on.

AWS relies heavily on high-end computing gear, much of it manufactured in China or Taiwan. While some semiconductor components were recently exempted from tariffs, other critical data center parts may still be affected. Trump has paused most new tariffs for 90 days, but a 145% tariff on Chinese goods remains in effect.

"AWS and other hyperscalers could choose to absorb the cost or pass it on to customers," said Travis Rehl, CTO of cloud consultancy Innovative Solutions. "I'm unsure which direction they'd take."

Ben Schaechter, CEO of cloud cost optimization firm Vantage, said tariffs could force AWS to tighten future discounts or slow infrastructure growth due to higher hardware costs.

The bigger threat, some say, is reduced cloud spending.

Randall Hunt, CTO of cloud advisory firm Caylent, told BI that customers are already broadly cutting back on spending in anticipation of slower growth and rising costs.

Data sovereignty and Trump-era fears

The growing uncertainty over Trump's actions has pushed Amazon to prepare for even more extreme scenarios, including potential US government demands for cloud customer data or a move to block non-US users from accessing AWS.

Those concerns over privacy and data access have grown recently as Trump's tariff-driven trade war increased tensions between the US and European countries.

If asked about potential US government data requests, Amazon instructed employees to emphasize that AWS does not disclose customer information unless legally required and that all requests are thoroughly reviewed.

The guidance also clarifies AWS's position on the CLOUD Act, or Clarifying Lawful Overseas Use of Data Act. The CLOUD Act, passed in 2018, gives US law enforcement agencies the authority to access data held by US-based companies, even if stored abroad.

AWS has not provided enterprise or government customer data stored outside the US since at least 2020 and it will challenge any "over-broad" or unlawful requests, the document stated.

"The CLOUD Act does not provide the U.S. government with unfettered access to data held by cloud providers," the document added.

President Donald Trump.

Chip Somodevilla/Getty Images

On the question of whether Trump could block foreign access to AWS, the document stops short of addressing whether the president has the authority, but notes there's no indication such action is imminent.

In fact, it argues that doing so would contradict the administration's stated goal of supporting US tech companies abroad.

"AWS is closely plugged into US policy and this Administration's efforts, and can confirm we have heard nothing about restricting cloud services to non-US customers in response to addressing trade imbalances or unfair trade barriers, and expect their focus to continue to be on tariffs as the 'rebalancing' mechanism," the document said.

Sanctions and 'Buy Canada'

AWS also addresses fears that US sanctions could restrict access to its services in certain countries. The guidance notes that full country-wide sanctions are rare and that in the past, companies have been given time to wind down operations when sanctions do occur.

"US country-wide sanctions or services restrictions are exceedingly rare," the document said. "But in the theoretical case that such sanctions ever came to pass, AWS would do everything practically possible to provide continuity of service."

Finally, AWS is preparing for patriotic backlash in some markets, such as a potential "Buy Canada" movement. Employees are told to clarify that AWS's Canada office is a registered Canadian corporation headquartered in Toronto, and that customers can choose to store their data locally and encrypt it.

Still, the guidance urges caution. Employees should be careful about framing AWS as a "Canadian business," given the complexity of the term.

"Whether AWS is a 'Canadian business' will depend on how that is defined in particular circumstances," the document concludes.

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.


First, Microsoft tapped the AI data center brakes. Now analysts are worried about Amazon.

23 April 2025 at 02:00
Amazon Web Services CEO Matt Garman.

Amazon

  • Wall Street analysts say Amazon has paused some data center deals.
  • The data center market may be slightly cooling, after a frenzied couple of years.
  • Microsoft has also taken its foot off the AI accelerator a bit recently.

First, it was Microsoft. Now Amazon is raising eyebrows on Wall Street as fresh signs suggest the cloud giant may be easing off the accelerator in the race to build AI data centers.

Some analysts are concerned that Amazon Web Services, the dominant cloud provider, may be entering a digestion phase that could slow momentum in the data center market.

The speculation gained traction Monday when famed short-seller Jim Chanos posted on X with a simple and ominous remark, alongside an analyst note suggesting caution around AWS's data center plans.

Data Centers #UhOh pic.twitter.com/fRWNFxMV58

James Chanos (@RealJimChanos), April 21, 2025

That note, published by Wells Fargo analysts, cited industry sources who reported this weekend that AWS paused discussions for certain new colocation data center deals, particularly international ones. The analysts stressed that the scale of the pause remains unclear, though they're worried.

"It does appear like the hyperscalers are being more discerning with leasing large clusters of power, and tightening up pre-lease windows for capacity that will be delivered before the end of 2026," the analysts wrote.

Oh no, colo

The same day, TD Cowen analysts published similar findings from their own data center research.

"Our most recent checks point to a pullback in US colocation deals from Amazon," they wrote in a note to investors. Colo deals, as they're known in the industry, involve different companies sharing space in the same data center.

"We are aware of select colocation deals that it walked away from, as well as expansion options that it chose not to exercise," the Cowen analysts added.

They also said that their recent industry checks point to a slowdown in Amazon's AI ambitions in Europe.

"This is a dynamic we will continue to monitor," the analysts wrote.

Three signs of moderation

More broadly, Cowen's analysts have spotted a cooling in the data center market relative to the frenzied activity of recent years.

"We observed a moderation in the exuberance around the outlook for hyperscale demand which characterized the market this time last year," they wrote, laying out three specific signs of calmer times:

  • Data center demand has moderated a bit, particularly in Europe.
  • There has been a broader moderation in the urgency and speed with which cloud companies seek to secure data center capacity.
  • The number of large deals in the market appears to have moderated.

Some context is important here. The AI data center market has gone gangbusters ever since OpenAI's ChatGPT burst onto the scene in late 2022 and showed the potential of generative AI technology.

These signs of moderation are pretty small in relation to this huge trend. However, trillions of dollars in current and planned investments are riding on the generative AI boom. With so much money on the line, any inkling that this rocket ship is not ascending at light speed is unnerving.

Microsoft made similar moves

These signals from Amazon echo similar moves by Microsoft, which recently halted some data center projects.

"Like Microsoft, AWS seems to be digesting recent aggressive leasing activity," the Wells Fargo analysts wrote.

They clarified that this doesn't mean signed deals are being canceled, but rather that AWS is pulling back from early-stage agreements like Letters of Intent or Statements of Qualifications, common ways that cloud providers work with partners to prepare for data center projects.

Amazon says it still sees strong AI demand

In response to these growing concerns, Kevin Miller, vice president of Global Data Centers at AWS, posted on LinkedIn on Monday to offer some clarity.

"We continue to see strong demand for both Generative AI and foundational workloads on AWS," he wrote.

He explained that AWS has thousands of cloud customers around the world and must weigh multiple solutions to get them the right capacity at the right time.

"Some options might end up costing too much, while others might not deliver when we need the capacity," Miller wrote. "Other times, we find that we need more capacity in one location and less in another. This is routine capacity management, and there haven't been any recent fundamental changes in our expansion plans."

Amazon did not respond to a request for comment from Business Insider.

Digestion or indigestion?

Miller's comments aim to position the pause not as a red flag, but as part of the normal ebb and flow of data center growth.

Historically, these digestion periods, marked by slowing new leases or deferred builds, can last 6 to 12 months before a rebound, the Wells Fargo analysts wrote. Google, for instance, pulled back from leasing in the second half of 2024, only to return aggressively in early 2025, they noted.

The Cowen analysts said Amazon's recent cautious moves to pull back on colocation deals may be related to efforts to increase efficiency across its data center operations. Also, AWS typically doesn't do a lot of colocation deals anyway, preferring instead to build its own data centers, the analysts wrote.

They also wrote that other tech giants, such as Meta and Google, are still aggressively pursuing new capacity.

The bottom line? While AWS appears to be taking a breath, the AI cloud race is far from over. Analysts and investors will watch closely to see whether this pause marks a brief recalibration or a more significant shift in AI strategy.


'Project Greenland': How Amazon overcame a GPU crunch

22 April 2025 at 02:00
An Amazon branded microchip shrinking

Daniil Dubov/Getty, Tyler Le/BI

  • GPU shortages delayed projects in Amazon's retail division last year.
  • The company created a more efficient approval and monitoring process for internal GPU requests.
  • Amazon says it now has "ample" GPU capacity across the company.

Last year, Amazon's huge retail business had a big problem: It couldn't get enough AI chips to get crucial work done.

With projects getting delayed, the Western world's largest e-commerce operation launched a radical revamp of internal processes and technology to tackle the issue, according to a trove of Amazon documents obtained by Business Insider.

The initiative offers a rare inside look at how a tech giant balances internal demand for these GPU components with supply from Nvidia and other industry sources.

Early in 2024, the generative AI boom was in full swing, with thousands of companies vying for access to the infrastructure needed to apply this powerful new technology.

Inside Amazon, some employees went months without securing GPUs, leading to delays that disrupted timely project launches across the company's retail division, a sector that spans its e-commerce platform and expansive logistics operations, according to the internal documents.

In July, Amazon launched Project Greenland, a "centralized GPU capacity pool" to better manage and allocate its limited GPU supply. The company also tightened approval protocols for internal GPU use, the documents show.

"GPUs are too valuable to be given out on a first-come, first-served basis," one of the Amazon guidelines stated. "Instead, distribution should be determined based on ROI layered with common sense considerations, and provide for the long-term growth of the Company's free cash flow."

Two years into a global shortage, GPUs remain a scarce commodity, even for some of the largest AI companies. OpenAI CEO Sam Altman, for example, said in February that the ChatGPT-maker was "out of GPUs," following a new model launch. Nvidia, the dominant GPU provider, has said it will be supply-constrained this year.

However, Amazon's efforts to tackle this problem may be paying off. By December, internal forecasts suggested the crunch would ease this year, with chip availability expected to improve, the documents showed.

In an email to BI, an Amazon spokesperson said the company's retail arm, which sources GPUs through Amazon Web Services, now has full access to the AI processors.

"Amazon has ample GPU capacity to continue innovating for our retail business and other customers across the company," the spokesperson said. "AWS recognized early on that generative AI innovations are fueling rapid adoption of cloud computing services for all our customers, including Amazon, and we quickly evaluated our customers' growing GPU needs and took steps to deliver the capacity they need to drive innovation."

"Shovel-ready"

Amazon CEO Andy Jassy

Amazon

Amazon now demands hard data and return-on-investment proof for every internal GPU request, according to the documents obtained by BI.

Initiatives are "prioritized and ranked" for GPU allocation based on several factors, including the completeness of data provided and the financial benefit per GPU. Projects must be "shovel-ready," or approved for development, and prove they are in a competitive "race to market." They also have to provide a timeline for when actual benefits will be realized.

One internal document from late 2024 stated that Amazon's retail unit planned to distribute GPUs to the "next highest priority initiatives" as more supply became available in the first quarter of 2025.

The broader priority for Amazon's retail business is to ensure its cloud infrastructure spending generates the "highest return on investment through revenue growth or cost-to-serve reduction," one of the documents added.

Amazon's new GPU "tenets"

Amazon's retail team codified its approach into official "tenets," internal guidelines that individual teams or projects create for faster decision-making. The tenets emphasize strong ROI, selective approvals, and a push for speed and efficiency.

And if a greenlit project underdelivers, its GPUs can be pulled back.

Here are the 8 tenets for GPU allocation, according to one of the Amazon documents:

  1. ROI + High Judgment thinking is required for GPU usage prioritization. GPUs are too valuable to be given out on a first-come, first-served basis. Instead, distribution should be determined based on ROI layered with common sense considerations, and provide for the long-term growth of the Company's free cash flow. Distribution can happen in bespoke infrastructure or in hours of a sharing/pooling tool.
  2. Continuously learn, assess, and improve: We solicit new ideas based on continuous review and are willing to improve our approach as we learn more.
  3. Avoid silo decisions: Avoid making decisions in isolation; instead, centralize the tracking of GPUs and GPU related initiatives in one place.
  4. Time is critical: Scalable tooling is a key to moving fast when making distribution decisions which, in turn, allows more time for innovation and learning from our experiences.
  5. Efficiency feeds innovation: Efficiency paves the way for innovation by encouraging optimal resource utilization, fostering collaboration and resource sharing.
  6. Embrace risk in the pursuit of innovation: Acceptable level of risk tolerance will allow to embrace the idea of 'failing fast' and maintain an environment conducive to Research and Development.
  7. Transparency and confidentiality: We encourage transparency around the GPU allocation methodology through education and updates on the wiki's while applying confidentiality around sensitive information on R&D and ROI sharable with only limited stakeholders. We celebrate wins and share lessons learned broadly.
  8. GPUs previously given to fleets may be recalled if other initiatives show more value. Having a GPU doesn't mean you'll get to keep it.
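
The ranking-and-clawback logic these tenets describe can be pictured with a small sketch. The snippet below is a hypothetical simplification written for this article, not Amazon's actual tooling; the request fields and scoring are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class GpuRequest:
        name: str
        gpus_requested: int
        est_annual_benefit_usd: float  # projected revenue growth or cost-to-serve savings
        shovel_ready: bool             # approved for development
        data_complete: bool            # request backed by hard data

    def rank_and_allocate(requests, available_gpus):
        # Screen out requests that aren't shovel-ready or lack supporting data,
        # then rank the rest by estimated benefit per GPU, highest first (tenet 1).
        eligible = [r for r in requests if r.shovel_ready and r.data_complete]
        eligible.sort(key=lambda r: r.est_annual_benefit_usd / r.gpus_requested, reverse=True)

        allocations, remaining = {}, available_gpus
        for r in eligible:
            grant = min(r.gpus_requested, remaining)
            if grant > 0:
                allocations[r.name] = grant
                remaining -= grant
        return allocations, remaining

In a scheme like this, tenet 8's clawback would amount to re-running the ranking with updated benefit estimates: a project that under-delivers simply stops ranking high enough to keep its GPUs.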

Project Greenland

AWS CEO Matt Garman

Amazon

To address the complexity of managing GPU supply and demand, Amazon launched a new project called Greenland last year.

Greenland is described as a "centralized GPU orchestration platform to share GPU capacity across teams and maximize utilization," one of the documents said.

It can track GPU usage per initiative, share idle servers, and implement "clawbacks" to reallocate chips to more urgent projects, the documents explained. The system also offers a simplified networking setup and security updates, while alerting employees and leaders to projects with low GPU usage.

This year, Amazon employees are "mandated" to go through Greenland to obtain GPU capacity for "all future demands," and the company expects this to increase efficiency by "reducing idle capacity and optimizing cluster utilization," it added.
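
As a rough illustration of how a "clawback" might be triggered, here is a minimal sketch that flags low-utilization initiatives for reclamation. It is a toy example with an invented threshold and data shape, not a description of Greenland's actual implementation.

    IDLE_UTILIZATION_THRESHOLD = 0.25  # hypothetical floor, not an Amazon figure

    def flag_clawback_candidates(avg_utilization_by_initiative):
        # Return initiatives whose average GPU utilization sits below the threshold,
        # sorted from least to most utilized, so their capacity can be returned
        # to the shared pool and reallocated to higher-priority work.
        return sorted(
            (name for name, util in avg_utilization_by_initiative.items()
             if util < IDLE_UTILIZATION_THRESHOLD),
            key=lambda name: avg_utilization_by_initiative[name],
        )

    # Example: "project-b" would be flagged and its GPUs offered back to the pool.
    print(flag_clawback_candidates({"project-a": 0.81, "project-b": 0.07}))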

$1 billion investment in AI-related projects

Amazon's retail business is wasting no time putting its GPUs to work. One document listed more than 160 AI-powered initiatives, including the Rufus shopping assistant and Theia product image generator.

Other AI projects in the works include, per the document:

  • A vision-assisted package retrieval (VAPR) service that uses computer-vision technology to help drivers quickly identify and pick the correct packages from vans at delivery stops.
  • A service that automatically pulls in data from external websites to create consistent product information.
  • A new AI model that optimizes driver routing and package handling to reduce delivery times and improve efficiency.
  • An improved customer service agent that uses natural language to address customer return inquiries.
  • A service that automates seller fraud investigations and verifies document compliance.

Last year, Amazon estimated that AI investments by its retail business indirectly contributed $2.5 billion in operating profits, the documents showed. Those investments also resulted in approximately $670 million in variable cost savings.

It's unclear what the 2025 estimates are for those metrics. But Amazon plans to continue spending heavily on AI.

As of early this year, Amazon's retail arm anticipated about $1 billion in investments for GPU-powered AI projects. Overall, the retail division expects to spend around $5.7 billion on AWS cloud infrastructure in 2025, up from $4.5 billion in 2024, the internal documents show.

Improving capacity

Last year, Amazon's heavy slate of AI projects put pressure on its GPU supply.

Throughout the second half of 2024, Amazon's retail unit suffered a supply shortage of more than 1,000 P5 instances, an AWS cloud server type that contains up to eight Nvidia H100 GPUs, said one of the documents from December. The P5 shortage was expected to slightly improve by early this year, and turn to a surplus later in 2025, according to those December estimates.

Amazon's spokesperson told BI those estimates are now "outdated," and there's currently no GPU shortage.

AWS's in-house AI chip Trainium was also projected to satisfy the retail division's demand by the end of 2025, but "not sooner," one of the documents said.

Amazon's improving capacity aligns with Andy Jassy's remarks from February, when he said the GPU and server constraints would "relax" by the second half of this year.

But even with these efforts, there are signs that Amazon still worries about GPU supply.

A recent job listing from the Greenland team acknowledged that explosive growth in GPU demand has become this generation's defining challenge: "How do we get more GPU capacity?"

Do you work at Amazon? Got a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.


An Amazon AI leader tells BI why vibe coding is here to stay

10 April 2025 at 07:19
Deepak Singh leads Amazon Web Services' developer agents and experiences division.

Amazon Web Services

  • Deepak Singh leads AWS' Q Developer team and thinks vibe coding can be a game changer for developers.
  • Singh told BI that conversations with customers haven't been about replacing human developers with AI.
  • "The reason vibe coding is such a vibe" is because developers enjoy using it, Singh said.

Vibe coding is shaking up software development, and an Amazon executive in charge of developing the company's AI agents thinks it's here to stay.

Deepak Singh, vice president of Amazon Web Services' developer agents and experiences division, told Business Insider that software developers are using increasingly powerful AI agents to improve their productivity.

"The way I like to state it is your AI software has gone from 'help me type faster' β€” a coding companion β€” to being a true pair programmer that helps you build your software by working with you," Singh said.

Vibe coding is a term coined in February by OpenAI cofounder Andrej Karpathy to describe giving AI prompts to write code. As he puts it, developers can "fully give in to the vibes" and "forget the code even exists."

While some fear vibe coding may mean fewer software developer roles, Singh said conversations with customers haven't revolved around finding ways of replacing human workers with AI agents.

Others, like Singh, see vibe coding as a way for developers to free up their time to focus on more important aspects of their jobs, such as problem-solving.

"A really good engineer is one that can take a problem, shine a light upon it, and clarify," Singh said, pointing to tenets Amazon lays out for its legions of software developers and engineers. "It's not just about writing the most complex code. It's about simplifying a complex problem."

There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper…

Andrej Karpathy (@karpathy), February 2, 2025

Developers have long been interested in how AI can help them. When ChatGPT first arrived in late 2022, developers were quick to adopt the generative AI tool to help them code, even if the chatbot was prone to generating errors.

Back then, developers could find value by getting generative AI to help speed up the line-by-line process of writing code. They could "add a command and have the command write the code for them," as Singh put it.

This year, tech companies have been launching a growing number of AI models with the ability to reason and spend longer on problems β€” a key reason vibe coding is having its moment. Amazon made Q Developer, its own AI assistant for software developers, generally available in April 2024. This week, the company unveiled a feature for developers to interact with Q in different languages, such as Spanish, Korean, or Hindi.

For Singh, the best developers are those who are "very clear in the guidance they're giving the AI." They're the ones who are able to "move very quickly," he said.

Some AWS customers appear to be successfully vibe coding. Singh gave the example of National Australia Bank, saying that half the code the bank puts into production comes from Q Developer rather than being handwritten by a human.

"The reason vibe coding is such a vibe, no pun intended, is because developers enjoy it and they're able to make progress," Singh said.


CoreWeave IPO debut: Pay attention to this potentially expensive hardware problem

28 March 2025 at 09:20
Michael Intrator, cofounder and CEO of CoreWeave, speaks on stage at the Web Summit 2024 in Lisbon
Michael Intrator, the CEO of CoreWeave, is on the cusp of a big initial public offering.

Bruno de Carvalho/SOPA Images/LightRocket via Getty Images

  • Rapid AI advancements may reduce the useful life of CoreWeave's Hopper-based Nvidia GPUs.
  • CoreWeave has a lot of Hopper-based GPUs, which are becoming outdated due to the Blackwell rollout.
  • Amazon recently cut the estimated useful life of its servers, citing AI advancements.

I recently wrote about Nvidia's latest AI chip-and-server package and how this new advancement may dent the value of the previous product.

Nvidia's new offering likely caused Amazon to reduce the useful life of its AI servers, which took a big chunk out of earnings.

Other Big Tech companies, such as Microsoft, Google, and Meta, may have to take similar tough medicine, according to analysts.

This issue might also impact Nvidia-backed CoreWeave, which is doing an IPO. Its shares listed on Friday under the ticker "CRWV." The company is a so-called neocloud, specializing in generative AI workloads that rely mostly on Nvidia GPUs and servers.

Like its bigger rivals, CoreWeave has been buying oodles of Nvidia GPUs and renting them out over the internet. The startup had deployed more than 250,000 GPUs by the end of 2024, per its filing to go public.

These are incredibly valuable assets. Tech companies and startups have been jostling for the right to buy Nvidia GPUs in recent years, so any company that has amassed a quarter of a million of these components has done very well.

There's a problem, though. AI technology is advancing so rapidly that it can make existing gear obsolete, or at least less useful, more quickly. This is happening now as Nvidia rolls out its latest AI chip-and-server package, Blackwell. It's notably better than the previous version, Hopper, which came out in 2022.

Veteran tech analyst Ross Sandler recently shared a chart showing that the cost of renting older Hopper-based GPUs has plummeted as the newer Blackwell GPUs become more available.

A chart showing the cost of renting Nvidia H100 GPUs

Ross Sandler/Barclays Capital

The majority of CoreWeave's deployed GPUs are based on the older Hopper architecture, according to its latest IPO filing from March 20.

Sometimes, in situations like this, companies have to adjust their financials to reflect the quickly changing product landscape. This is done by reducing the estimated useful life of the assets in question. Then, through depreciation, the value of the assets is reduced over a short time period to reflect things like wear and tear and, ultimately, obsolescence. The faster the depreciation, the bigger the hit to earnings.
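
To make the mechanics concrete, here is a small worked sketch of straight-line depreciation. The fleet cost below is purely hypothetical, not a figure CoreWeave has disclosed.

    def annual_straight_line_depreciation(cost, useful_life_years, salvage_value=0.0):
        # Straight-line method: spread (cost - salvage value) evenly over the useful life.
        return (cost - salvage_value) / useful_life_years

    fleet_cost = 12_000_000_000  # hypothetical GPU fleet cost

    expense_6yr = annual_straight_line_depreciation(fleet_cost, 6)  # $2.0B per year
    expense_5yr = annual_straight_line_depreciation(fleet_cost, 5)  # $2.4B per year

    # Cutting the useful life from six years to five adds roughly $400M of
    # depreciation expense per year, which comes straight out of operating income.
    print(f"{(expense_5yr - expense_6yr) / 1e6:.0f}M extra annual expense")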

Amazon's AI-powered depreciation

Amazon, the world's largest cloud provider, just did this. On a recent conference call with analysts, the company "observed an increased pace of technology development, particularly in the area of artificial intelligence and machine learning."

That caused Amazon Web Services to decrease the useful life of some of its servers and networking gear from six years to five years, beginning in January.

Sandler, the Barclays analyst, thinks other tech companies may have to do the same, which could cut operating income by billions of dollars.

Will CoreWeave have to do the same, just as it's trying to pull off one of the biggest tech IPOs in years?

I asked a CoreWeave spokeswoman about this, but she declined to comment. This is not unusual, because companies in the midst of IPOs have to follow strict rules that limit what they can say publicly.

CoreWeave's IPO risk factor

CoreWeave talks about this issue in its latest IPO filing, writing that the company is always upgrading its platform, which includes replacing old equipment.

"This requires us to make certain estimates with respect to the useful life of the components of our infrastructure and to maximize the value of the components of our infrastructure, including our GPUs, to the fullest extent possible."

The company warned those estimates could be inaccurate. CoreWeave said its calculations involve a host of assumptions that could change and infrastructure upgrades that might not go according to plan β€” all of which could affect the company, now and later.

This caution is normal because companies have to detail everything that could hit their bottom line, from pandemics to cybersecurity attacks.

As recently as January 2023, CoreWeave was taking the opposite approach to this situation, according to its IPO filing. The company increased the estimated useful life of its computing gear from five years to six years. That change reduced expenses by $20 million and boosted earnings by 10 cents a share for the 2023 year.

If the company now follows AWS and reduces the useful life of its gear, that might dent earnings. Again, CoreWeave's spokeswoman declined to comment, citing IPO rules.

An important caveat: Just because one giant cloud provider made an adjustment like this, it doesn't mean others will have to do the same. CoreWeave might design its AI data centers differently, somehow making Nvidia GPU systems last longer or become obsolete less quickly, for instance.

It's also worth noting that other big cloud companies, including Google, Meta, and Microsoft, have increased the estimated useful life of their data center equipment in recent years.

Google and Microsoft's current estimates are six years, like CoreWeave's, while Meta's is 5.5 years.

However, Sandler, the Barclays analyst, thinks some of these big companies will follow AWS and shorten these estimates.


Nvidia CEO Jensen Huang joked about something that could cost his biggest customers billions of dollars

22 March 2025 at 02:00
Nvidia CEO Jensen Huang.

Chip Somodevilla/Getty Images

  • Nvidia's new Blackwell GPUs mean the older Hopper models are less useful, affecting cloud giants.
  • Rapid tech advancements may force cloud giants to adjust asset depreciation, denting earnings.
  • Amazon leads in adjusting server lifespan. Meta and Google could see profit hits.

Nvidia CEO Jensen Huang made a joke this week that his biggest customers probably won't find funny.

"I said before that when Blackwell starts shipping in volume, you couldn't give Hoppers away," he said at Nvidia's big AI conference Tuesday.

"There are circumstances where Hopper is fine," he added. "Not many."

He was talking about Nvidia's latest AI chip-and-server package, Blackwell. It's notably better than the previous version, Hopper, which came out in 2022.

Big cloud companies, such as Amazon, Microsoft, and Google, buy a ton of these GPU systems to train and run the giant models powering the generative AI revolution. Meta has also gone on a GPU spending spree in recent years.

These companies should be happy about an even more powerful GPU like Blackwell. It's generally great news for the AI community. But there's a problem, too.

AI obsolescence

When new technology like this improves at such a rapid pace, the previous versions become obsolete, or at least less useful, much faster.

This makes these assets less valuable, so the big cloud companies may have to adjust. This is done through depreciation, where the value of assets is reduced over time to reflect things like wear and tear and, ultimately, obsolescence. The faster the depreciation, the bigger the hit to earnings.

Ross Sandler, a top tech analyst at Barclays, warned investors on Friday that the big cloud companies and Meta will probably have to make these adjustments, which could significantly reduce profits.

"Hyperscalers are likely overstating earnings," he wrote.

Google and Meta did not respond to Business Insider's questions about this on Friday. Microsoft declined to comment.

Amazon takes the plunge first

Take the example of Amazon Web Services, the largest cloud provider. In February, it became the first to take the pain.

CFO Brian Olsavsky said on Amazon's earnings call last month that the company "observed an increased pace of technology development, particularly in the area of artificial intelligence and machine learning."

"As a result, we're decreasing the useful life for a subset of our servers and networking equipment from 6 years to 5 years, beginning in January 2025," Olsavsky said, adding that this will cut operating income this year by about $700 million.

Then, more bad news: Amazon "early-retired" some of its servers and network equipment, Olsavsky said, adding that this "accelerated depreciation" cost about $920 million and that the company expects it will decrease operating income in 2025 by about $600 million.

A much larger problem for others

Sandler, the Barclays analyst, included a striking chart in his research note on Friday. It showed the cost of renting H100 GPUs, which use Nvidia's older Hopper architecture. As you can see, the price has plummeted as the company's new, better Blackwell GPUs became more available.

A chart showing the cost of renting Nvidia H100 GPUs.

Ross Sandler/Barclays Capital

"This could be a much larger problem at Meta and Google and other high-margin software companies," Sandler wrote.

For Meta, he estimated that a one-year reduction in the useful life of the company's servers would increase depreciation in 2026 by more than $5 billion and chop operating income by a similar amount.

For Google, a similar change would knock operating profit by $3.5 billion, Sandler estimated.

An important caveat: Just because one giant cloud provider has already made an adjustment like this, it doesn't mean the others will have to do exactly the same thing. Some companies might design their AI data centers differently, somehow making Nvidia GPU systems last longer or become obsolete less quickly.

The time has come

When the generative AI boom was picking up steam in the summer of 2023, Bernstein analysts had already started worrying about this depreciation.

"All those Nvidia GPUs have to be going somewhere. And just how quickly do these newer servers depreciate? We've heard some worrying timetables," they wrote in a note to investors at the time.

One Bernstein analyst, Mark Shmulik, discussed this with my colleague Eugene Kim.

"I'd imagine the tech companies are paying close attention to GPU useful life, but I wouldn't expect anyone to change their depreciation timetables just yet," he wrote in an email to BI at the time.

Now, that time has come.


Amazon CEO Andy Jassy criticizes manager fiefdoms and stresses the need for 'meritocracy' in a leaked recording

21 March 2025 at 12:46
Amazon CEO Andy Jassy

Amazon

  • Amazon CEO Andy Jassy wants to reduce management layers and bureaucracy.
  • He told employees that building a giant team and fiefdom wouldn't help them get promoted.
  • He also encouraged staff to act like owners and stay aware of industry competition.

Amazon CEO Andy Jassy really wants to reduce management layers.

During a recent internal all-hands meeting, Jassy reiterated his commitment to de-layering, a move he thinks will cut bureaucracy. Amazon previously announced a plan to increase the ratio of individual contributors to managers by 15% by the end of March.

At the Tuesday meeting, the CEO said Amazon is actively changing how it thinks about promotions. He stressed the best leaders are those who "get the most done with the least amount of resources required to do the job," according to a recording of the meeting obtained by Business Insider.

Jassy added that "every new project shouldn't take 50 or more people to do it," and reminded employees that some of AWS's most successful products initially started with teams of about a dozen.

"The way to get ahead at Amazon is not to go accumulate a giant team and fiefdom," Jassy said. "There's no award for having a big team. We want to be scrappy about us to do a lot more things."

Jassy's comments were in response to a question about his intention to run Amazon like "the world's largest startup." In addition to the manager shake-up, Jassy underscored the need to build a culture of speed and meritocracy.

Amazon hasn't shared how exactly it is reducing management layers. Some managers were told to increase their number of direct reports, make fewer senior hires, and cut pay for certain employees, BI previously reported.

In an email to BI, an Amazon spokesperson said the company has now completed this process, which impacted a "relatively small subset of employees." The spokesperson added that Amazon combined teams and moved managers to individual contributor roles to reach its goal, and this "did not equate to eliminating 15% of manager roles."

"In September 2024, we shared with employees that we set a goal to increase the ratio of individual contributors to managers by 15% across our organizations because it was the right time to bring us closer to customers and reinforce our culture of ownership. There are a number of ways to achieve that increase. We've now reached that goal, which we believe will allow our teams to move even faster as they innovate for customers," the spokesperson said.

Meritocracy over bureaucracy

In September, Amazon also created a "No Bureaucracy" email alias, where employees could report unnecessary processes that needed to be fixed.

Jassy said during the Tuesday meeting that he's read every single one of the over a thousand emails he's received so far and that the company has made more than 375 changes as a result.

"We are, as a team, committed to getting rid of the bureaucracy," Jassy said.

When companies grow, it's natural to put more processes in place, Jassy added. But companies often make the mistake of focusing too much on adding more people and managing them versus improving the customer experience, he said.

"It's not how charismatic you are. It's not whether you're really good at managing up or managing sideways," he said. "What matters is what we actually get done for customers. That is what we reward. It's a meritocracy."

'It is your company'

Jassy also urged employees to "move fast and act like owners."

He said big companies tend to become slow and indecisive. This is a particularly big risk for Amazon, given the intense competition it faces. Competitors include the "most technically able, most hungry" companies in the world, including startups "working seven days a week, 15 hours a day," he said.

"One of the strengths of Amazon over the first 29 years is that we've hired really smart, motivated, inventive, ambitious people who have been great owners," Jassy said. "What would I do if this was my company? And by the way, it is your company. This is all of our company."

Another point Jassy made during the meeting was to be "hyper-aware" of what's going on around Amazon. That means keeping track of not just Amazon's own goals but also other technologies and companies that can serve as inspiration, he said.

"Great companies, startups who have that real missionary zeal and succeed are always looking around," Jassy said. "When you're inventing, you need that blind faith that you're building something maybe others haven't thought of, but you got to keep checking in to make sure it's the best solution available for people."

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.


Amazon employees are warning customers about DeepSeek privacy concerns — and pushing Amazon's own AI instead

17 March 2025 at 02:00
Person working on two computers with the amazon logo on one computer and the deep seek logo on the other and papers flying in the air around him

Amazon; DeepSeek; Getty Images; Ava Horton/BI

  • Amazon quickly integrated DeepSeek AI models into Bedrock due to high demand in January.
  • Amazon wants to promote its products as faster and more secure alternatives to DeepSeek.
  • The cloud giant warns employees not to share confidential information with DeepSeek.

In late January, as DeepSeek sent shockwaves through the tech industry, Amazon saw a huge spike in companies requesting access to the Chinese AI model on its development tool Bedrock.

Amazon swiftly added DeepSeek to Bedrock. Some employees who spoke to Business Insider felt the approval process was unusually fast. Amazon's CEO Andy Jassy later told investors the company moved quickly to meet customer demand.

DeepSeek's sudden rise has spurred swift reactions inside Amazon. The repercussions have been felt across product updates, sales pitches, and development efforts, according to internal documents seen by BI and people familiar with the matter.

The responses show how fast-moving AI discoveries can whipsaw even the biggest and smartest technology companies. Amazon rivals, including OpenAI, Google, Meta, and Microsoft, have also been forced to respond to the DeepSeek impact.

An Amazon spokesperson said the company's strategy has always focused on providing secure access to the latest models through AWS, giving customers control over their data to customize and build generative AI applications.

"Delivering DeepSeek models is an example of that," the spokesperson added in a statement to BI. "We're extremely pleased with the feedback that we've received from the thousands of customers who have already deployed DeepSeek on AWS."

'Privacy concerns'

DeepSeek's AI models upended the tech world in January with their powerful performance and low cost. Tech stocks plunged as investors questioned US tech companies' massive spending on computing products.

For now, Amazon continues to add DeepSeek-related features. Earlier this week, the cloud giant made it easier to use DeepSeek's reasoning model on Bedrock, offering a "fully managed" service with built-in security and monitoring features. Amazon Web Services CEO Matt Garman wrote on LinkedIn that there's been "incredible demand" for DeepSeek.
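
For readers unfamiliar with Bedrock, calling a managed model generally looks like the sketch below, which uses boto3's Converse API. The model ID here is a placeholder for illustration; the real identifier for a DeepSeek model would come from the Bedrock console for your region.

    import boto3

    # Bedrock exposes its hosted models through the bedrock-runtime client.
    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # Placeholder model ID, for illustration only.
    MODEL_ID = "deepseek.r1-example-placeholder"

    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": "Summarize this cloud cost report."}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )

    print(response["output"]["message"]["content"][0]["text"])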

AWS CEO Matt Garman

Noah Berger/Noah Berger

It's not just the Bedrock team scurrying to make changes. One person said that DeepSeek has sparked many new discussions across Amazon.

One particular topic has been how Amazon should position itself against DeepSeek.

AWS has encouraged employees to highlight privacy and security concerns around DeepSeek when they speak to customers, according to internal guidelines seen by BI. They should remind customers of the importance of "model choice" and pitch AWS's Nova AI models as an alternative, the document added.

The guidelines also suggest promoting Bedrock, which AWS says provides a more secure and private method of accessing AI models. With Bedrock, customer data is not shared with model providers or used to improve base models. Amazon expects most customers to use open-source versions of DeepSeek models, not those directly offered by the Chinese company, it added.

"DeepSeek's privacy policy states they collect user data and may store them on servers in China," the guidelines said. "We are aware of the privacy concerns on DeepSeek models."

Nova is faster and safer

AWS has also told employees to leverage DeepSeek's shortcomings when selling Nova.

The guidelines say Nova models are faster than DeepSeek's models, based on third-party benchmark data, and more secure given AWS's more robust "responsible AI" standards.

Nova is more comparable to DeepSeek's V3 model than to the R1 reasoning model, and the two serve different needs, the guidelines also stated. However, V3 is a "text-only model," while Nova supports image and video understanding, the document emphasized.

AWS is now working on its own reasoning model that would directly compete with DeepSeek's R1, BI previously reported. While AWS has been developing the new model for months, DeepSeek's recent emergence added more pressure to expedite its progress, one of the people familiar with the matter said.

Efforts to study DeepSeek's technology are in the works at Amazon, and AWS wants to apply some of the training techniques DeepSeek used in its new reasoning model, some of the people added.

Amazon CEO Andy Jassy

Noah Berger/Noah Berger

During last month's earnings call, Jassy said Amazon was "impressed" with a lot of DeepSeek's training methodologies. Those include "flipping the sequencing of reinforcement training" and some of its "inference optimizations," Jassy explained.

"For those of us who are building frontier models, we're all working on the same types of things and we're all learning from one another. I think you have seen and will continue to see a lot of leapfrogging between us," he said.

'Deepseek-interest' channel

On the day DeepSeek roiled the stock market in late January, Amazon employees created an internal Slack channel called "Deepseek-interest," according to a screenshot seen by BI. More than 1,300 employees joined the channel in just a few days.

One person wrote on this Slack channel that he was "surprised" there wasn't much pushback against DeepSeek given its China origin and "security concerns." Another person asked for Neuron, AWS's in-house chip development platform, to support DeepSeek models. A third person wrote about a customer complaint over errors they saw while using DeepSeek on Bedrock.

Amazon also held an internal DeepSeek learning session in late January, according to one of the Slack messages. The event covered AWS's messaging, positioning, and key differentiators versus DeepSeek.

Moving on from DeepSeek

Meanwhile, Amazon now discourages employees from using DeepSeek on their work computers, according to several people familiar with the matter. Staff now get a warning not to share confidential information with DeepSeek's app, the same message they see when using ChatGPT at work.

Perhaps in a sign of how fast things change in AI, some Amazon employees already seem to be moving on from DeepSeek to other Chinese AI offerings.

One person wrote in the internal Slack channel that AWS should start considering other China-based models, like Alibaba's Qwen.

"DeepSeek is already the past day," this person wrote. "When do we have Qwen2.5-Max?"

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.


Amazon VP who oversaw flagship AI product is leaving after a big shake-up

10 March 2025 at 11:00
An AWS office

Amazon

  • Amazon's VP of AI/ML services is leaving after a major reorg.
  • The exec ran several teams, including AWS's flagship AI product Bedrock.
  • AWS recently created a new Agentic AI team under VP Swami Sivasubramanian.

The VP in charge of Amazon's flagship AI product Bedrock is stepping down a year after joining the company.

Baskar Sridharan, Amazon Web Services' VP of AI/ML services and Data Services & Infrastructure, plans to leave following a recent reorganization that consolidated several teams within the cloud business, according to people familiar with the matter. An Amazon spokesperson declined to comment.

Sridharan oversaw the strategic direction and development of AWS's biggest AI products, including Bedrock and SageMaker, according to a company profile. Bedrock is a development tool that gives access to multiple models and has served as AWS's main horse in the AI race. Sridharan joined AWS in May, after working at Google Cloud and Microsoft for more than 20 years.

Sridharan's departure is part of a big shake-up that created a new Agentic AI team last week within AWS. Swami Sivasubramanian, VP of AI and data, was promoted to lead the team and now reports directly to AWS CEO Matt Garman, according to an internal email seen by Business Insider. Sridharan reported to Sivasubramanian.

As part of this change, the Bedrock and SageMaker AI organizations will move under the AWS compute team led by VP Dave Brown, according to another internal email seen by BI. Prasad Kalyanaraman, VP of AWS infrastructure services, will take over several networking teams, and VP of technology Mai-Lan Tomsen Bukovec will add some of the data service units. Amazon's Q chatbot teams will join the new Agentic AI group.

Sridharan is part of several recent high-profile departures at AWS. Former AWS CEO Adam Selipsky, CMO Raejeanne Skillern, CFO Richard Puccio, and AI VP Matt Wood left the company last year.

Amazon faces fierce competition in the AI space, with companies ranging from Google to OpenAI all vying for supremacy. In one of the emails, Garman said the new Agentic AI team has the potential to build Amazon's "next multi-billion-dollar business."

"We're in the midst of the most significant technological transformation since the inception of cloud computing, and our customers are seeing unprecedented productivity gains through generative AI," Garman wrote.

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

'BS and hype': Amazon execs cast doubt on Microsoft's quantum-computing claims in private discussions

7 March 2025 at 02:00
Amazon CEO Andy Jassy
Amazon CEO Andy Jassy

Amazon

  • Amazon executives are skeptical about Microsoft's quantum computing breakthrough claims.
  • One Amazon exec called it "next level (in BS and hype)."
  • Recent quantum announcements by tech giants may be more hype than substance, other experts suggest.

Microsoft claimed a major quantum computing breakthrough last month. Amazon executives aren't buying it.

On February 19, Microsoft unveiled a new quantum processor called Majorana 1. The company said the chip uses a new type of architecture that could allow quantum computers to store way more data and perform much more complex calculations.

On the same day, Simone Severini, Amazon's head of quantum technologies, emailed CEO Andy Jassy casting doubt on Microsoft's claims, according to a copy of the email obtained by Business Insider.

Severini wrote that Microsoft's underlying scientific paper, published in Nature, "doesn't actually demonstrate" the claimed achievement and only showed that the new chip "could potentially enable future experiments."

He also noted that Microsoft has a checkered history in the quantum computing space, pointing to "several retracted papers due to scientific misconduct." The email was also shared with several other executives, including Amazon Web Services CEO Matt Garman and SVP James Hamilton.

"This seems to be a meaningful technical advancement, but it's far different from the breakthrough being portrayed in the media coverage," Severini wrote in the email.

It's also far from clear that Microsoft's architecture, which uses "topological qubits," will provide "any real performance benefit," he added.

AWS head of quantum technologies Simone Severini
AWS head of quantum technologies Simone Severini

Amazon

'Next level (in BS and hype)'

In internal Slack messages, seen by BI, Amazon execs and employees were more vocal about their frustration with Microsoft's claims.

Oskar Painter, Amazon's head of quantum hardware, stressed the need to "push back on BS statements like S. Nadella's," likely referring to Microsoft CEO Satya Nadella's social media post proclaiming major advancements with the Majorana chip.

Painter, who also teaches at Caltech, said he has more positive views of Google and IBM's quantum-computing efforts. Microsoft, on the other hand, is "next level (in BS and hype)," he wrote in an internal Slack message seen by BI.

One Amazon employee joked about receiving texts from friends asking if this would "change the world," while another poked fun at tech companies using grandiose statements to promote their quantum efforts.

"Seems as if Google, IBM and Microsoft's marketing teams are making faster progress than their hardware R&D teams," this person wrote in Slack.

'Insignificant' compared to what is needed

Tech companies have been working on quantum computing for years. The hope is to one day create machines that enable significant strides in areas like drug discovery or chemical compound creation. In recent months, Amazon and Google have also unveiled new quantum chips.

But their efforts to outduel each other could generate more hype than substance, according to industry experts.

Arka Majumdar, a computer engineering professor at the University of Washington, told BI that Microsoft's technological achievements are impressive but "insignificant" compared to what is needed to create a useful quantum computer. He added that Microsoft's claims appear "sensational" and "overhyped," given that they haven't reached meaningful scale.

Scott Aaronson, a renowned quantum computing researcher and computer science professor at the University of Texas at Austin, pointed out in a blog post that Microsoft's claim to have created a topological qubit "has not yet been accepted by peer review."

The peer review file of Microsoft's Nature report states that the "results in this manuscript do not represent evidence for the presence of Majorana zero modes in the reported devices," and the work is intended to introduce an architecture that "might enable fusion experiments using future Majorana zero modes."

In an email to BI, a Microsoft spokesperson said the Nature paper was published a year after its submission, and the company has made "tremendous progress" in that time. Microsoft plans to share additional data "in the coming weeks and months," the spokesperson added.

"Discourse and skepticism are all part of the scientific process," Microsoft's spokesperson said. "That is why we are dedicated to the continued open publication of our research, so that everyone can build on what others have discovered and learned."

Quantum timelines

Amazon and Microsoft also have differing views on the expected timeline for practical quantum usage.

Microsoft's spokesperson told BI, "Utility-scale quantum computers are just years away, not decades." Amazon's spokesperson, however, expects mainstream adoption to be a decade or two away.

"While quantum computers may not be commercially viable for 10-20 years, bringing quantum computing to fruition is going to take an extraordinary effort, including sustained interest and investment across the industry starting now," Amazon's spokesperson told BI.

Chris Ballance, CEO of quantum-computing startup Oxford Ionics, told BI that Amazon's recent quantum chip announcement was equally vague with little substance. Other industry experts previously told BI that they were unsure if the technology has advanced as far as these companies claim.

Still, Ballance said the recent array of quantum news is a "good sign" for the industry, which is still in its "very early days."

"It shows that people are waking up to the value of quantum computing and the need to address it in their roadmaps," Ballance said.

Have a tip?

Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

Amazon is working on a new 'reasoning' AI model that competes with OpenAI and Anthropic

4 March 2025 at 02:00
AWS VP Ruba Borno (left) and CEO Matt Garman
AWS VP Ruba Borno (left) and CEO Matt Garman

Amazon

  • Amazon plans to launch a new AI model with advanced reasoning capabilities.
  • The model aims to offer hybrid reasoning, a mix of quick answers and more complex thinking.
  • Amazon is prioritizing cost efficiency and external benchmark performance.

Amazon is building its own AI model that incorporates advanced "reasoning" capabilities, Business Insider has learned.

The offering is tentatively scheduled to launch by June under the Nova brand, a group of generative AI models Amazon unveiled late last year, according to a person directly involved in the project. This person asked not to be identified because they were not authorized to speak with the media.

Amazon wants the new model to take a "hybrid reasoning" approach that provides quick answers and more complex extended thinking within a single system, this person added. An Amazon spokesperson didn't respond to a request for comment.

Reasoning models have recently become the next frontier in AI. They often work more slowly but can also tackle tougher problems by trying multiple solutions and backtracking via chain-of-thought techniques. Companies including Google, OpenAI, and Anthropic have released their own reasoning models recently, while DeepSeek drew a lot of attention for building a similar offering more efficiently.

One of Amazon's priorities is to make its Nova reasoning model more price-efficient than competitors, which include OpenAI's o1, Anthropic's Claude 3.7 Sonnet, and Google's Gemini 2.0 Flash Thinking, according to the person involved in the project.

Amazon previously said that its existing in-house Nova models are at least 75% cheaper than third-party models available via its Bedrock AI development platform.

Another goal is to get Amazon's upcoming reasoning model ranked in the top 5 for performance, based on external benchmarks that evaluate software development and math skills, such as SWE, the Berkeley Function Calling Leaderboard, and AIME, among others, this person added.

The move reflects Amazon's commitment to invest in its own family of AI models, even as it preaches the need to offer a variety of model choices through Bedrock. Amazon's AGI team, run by head scientist Rohit Prasad, has been working on this new model.

It also puts Amazon in more direct competition with Anthropic, the AI startup that just launched its newest model. Claude 3.7 Sonnet uses a similar hybrid approach, combining quick answers and longer chain-of-thought outputs.

Amazon has invested $8 billion in Anthropic so far, and the two companies have been close partners, collaborating in areas including AI chips and cloud computing.

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

Talen CEO says Amazon is investing 'sweat equity' at its data center next door to the Susquehanna nuclear plant

27 February 2025 at 17:04
susquehanna nuclear plant
The Susquehanna nuclear plant in Berwick, Pennsylvania.

AP Photo/Ted Shaffrey

  • Talen Energy is confident in its power deal with Amazon despite a regulatory challenge.
  • CEO Mac McFarland sought to reassure wary investors on the company's Q4 earnings call on Thursday.
  • AI-driven electricity demand has boosted Talen's stock in the last year.

Despite regulatory challenges, Talen Energy is forging ahead with its deal to provide power to an Amazon data center adjacent to its Susquehanna nuclear power plant in Pennsylvania, CEO Mac McFarland said Thursday on the company's fourth-quarter earnings call.

"We have an existing relationship with aΒ hyperscalerΒ who shows no signs of pulling back on growth and has invested material capital and sweat equity into the Susquehanna agreement to date and on an ongoing basis," McFarland said.

McFarland was referring to its current interconnection service agreement with Amazon Web Services for 300 megawatts of power. The AI boom has led to a surge in electricity demand coming from data centers, and Big Tech companies like Amazon are increasingly signing massive deals with independent power providers like Talen to meet it.

When AWS and Talen first announced their deal last year, the companies said they'd contracted for 960 megawatts of co-located capacity. In November, the Federal Energy Regulatory Commission rejected an interconnection service agreement that would have allowed the companies to expand their power purchase agreement beyond the initial 300 megawatts. Talen appealed the rejection in federal court in January.

"Talen has time to convert the contract to a different commercial arrangement and/or resolve the regulatory questions, and we are confident and focused on executing on one or both of those options over time," said McFarland.

Talen reported a net income of $998 million attributable to stockholders for 2024. However, by Thursday's market close, shares had fallen 7.3%.

Talen is one of many independent power producers whose stock surged in the last year, as Wall Street hyped up companies it saw as well-positioned to benefit from Big Tech's massive AI data center buildout. Even after Thursday's dip, Talen's stock price has more than doubled since this time last year.

Shares of Vistra, another independent power producer that has seen its stock surge amid the AI boom, also fell today after it reported earnings.

While McFarland tried to reassure investors, not all of them were convinced.

"We're clearly getting anxious," Seaport Global analyst Angie Storozynski told Talen executives on the call.

Do you have a story to share about data centers and energy? Contact this reporter at [email protected].

Read the original article on Business Insider

Amazon loves AI, except when candidates use it in their job interviews

27 February 2025 at 12:11
Person staring at computer screen with robot standing behind with a poster of words

PhonlamaiPhoto/Getty, SDI Productions/Getty, Ava Horton/BI

  • Amazon is cracking down on the use of AI tools in job interviews.
  • AI-assisted interviews pose ethical challenges and have sparked debate in Silicon Valley.
  • Some Amazon employees consider AI tools beneficial, while others see them as dishonest.

Generative AI tools like coding assistants and "teleprompter" apps feed people live answers during job interviews, giving a leg up to candidates looking for an edge.

Amazon, one of the largest employers in the world, wants to curb this growing trend.

Recent Amazon guidelines shared with internal recruiters at the company say that job applicants can be disqualified from the hiring process if they are found to have used an AI tool during job interviews.

Amazon believes the use of AI tools in interviews gives candidates an "unfair advantage" and prevents the company from evaluating their "authentic" skills and experiences, the guidelines, which were obtained by Business Insider, say.

"To ensure a fair and transparent recruitment process, please do not use GenAl tools during your interview unless explicitly permitted," the guidelines say. "Failure to adhere to these guidelines may result in disqualification from the recruitment process."

The guidelines also tell Amazon recruiters to share these rules with job candidates.

The crackdown highlights one of the many ethical challenges that are bubbling up from the rise of generative artificial intelligence. Amazon has restricted employee use of AI tools such as ChatGPT, even as it encourages them to employ internal AI apps to boost productivity. "Hacking" job interviews with AI is a growing trend, prompting debate across Silicon Valley.

In a recent internal Slack conversation seen by BI, some Amazon employees debated the need to ban AI tools during job interviews when they can improve the quality of work.

"This is certainly an increasing trend, especially for tech/SDE roles," one of the Slack messages said, referring to software development engineers.

An Amazon spokesperson said the company's recruiting process "prioritizes ensuring that candidates hold a high bar."

When applicable, candidates must acknowledge that they won't use "unauthorized tools, like GenAI, to support them" during an interview, the spokesperson added in an email to BI.

Tips to identify the use of gen AI tools

The trend has become a big enough problem for Amazon that it has even shared internal tips on how to spot applicants using gen AI tools during interviews.

The indicators, the guidelines say, include:

  • The candidate can be seen typing while being asked questions. (Note: it is not uncommon for candidates to write down or type the question asked as they prepare to answer.)
  • The candidate appears to be reading their answers rather than responding naturally. This could include correcting themselves when they misread a word.
  • The candidate's eyes appear to be tracking text or looking elsewhere, rather than viewing their primary display or moving naturally during conversation.
  • The candidate delivers confident responses that do not clearly or directly address the question.
  • The candidate reacts to the outputs of the AI tool when they appear to be incorrect or irrelevant. This is often demonstrated by the candidate being distracted or confused as they try to make sense of the outputs.

While candidates are permitted to talk about how they have used generative AI applications to "achieve efficiencies" in their current or previous roles, they're strictly prohibited from using them during job interviews, the Amazon guidelines add.

A recent video, produced by an AI company that claims to have received a job offer from Amazon after using its coding assistant during an interview, raised alarms internally, one person familiar with the matter told BI. This person asked not to be identified because they were not authorized to speak with the media.

'Mainstream' problem

This is not just an Amazon problem. Job seekers are becoming increasingly bold in interviews, using different AI tools. A recent experiment found it was easy to cheat in job interviews using AI tools like ChatGPT.

In October, the xAI cofounder Greg Yang wrote on X that he'd caught a job candidate cheating with Anthropic's Claude AI service.

"The candidate tried to use claude during the interview but it was way too obvious," Yang wrote.

Matthew Bidwell, a business professor at the Wharton School, told BI that these AI tools "definitely penetrated the mainstream, and employers are worried about it," citing conversations with students in his executive-management program.

Bidwell said it's a problem when employers can't detect these tools and the job candidates are uncomfortable acknowledging their use.

"There's a strong risk of people using it to misrepresent their skills, and I think that is somewhat unethical," Bidwell said.

Bar raising?

Not everyone is opposed to it. Some Silicon Valley companies are open to allowing these apps in job interviews because they already use them at work. Others are making the technical interview an open-book test but adding questions for a deeper assessment.

Some Amazon employees appear less concerned about it, too.

In a recent Slack conversation at Amazon seen by BI, one person wrote that their team was "studying" the possibility of providing a generative AI assistant to candidates and changing their hiring approach. Another person said that even if a candidate got hired after using these tools, Amazon had "other mechanisms" to address those who do not meet expectations for their roles.

A third person questioned whether Amazon could benefit from this. Using generative AI may be "dishonest or unprofessional," this person said, but on the other hand, it's "raising the bar" for Amazon by improving the quality of the interview.

"If judged solely by the outcome, it could be considered bar-raising," this person wrote.

Have a tip? Contact this reporter via email at [email protected] or Signal, Telegram, or WhatsApp at 650-942-3061. Use a personal email address and a nonwork device; here's our guide to sharing information securely.

Read the original article on Business Insider

Amazon's investments in Anthropic are now worth a cool $14 billion

21 February 2025 at 11:11
AWS CEO Matt Garman
AWS CEO Matt Garman

Amazon

  • The value of Amazon's investments in Anthropic has soared.
  • Amazon has agreed to invest $8 billion in Anthropic so far.
  • Anthropic and Amazon have a close partnership that includes an arrangement to use AWS's AI chips.

Amazon is seeing massive gains from its investment in Anthropic.

The cloud giant estimated the fair value of its stake in Anthropic at $14 billion at the end of December, according to a recent regulatory filing. That's up from $8 billion, the filing shows.

That means Amazon's investments have soared roughly 75% since it started backing the AI startup in 2023, for a cool $6 billion gain on paper.

Amazon first invested $1.25 billion in Anthropic in September 2023 and another $2.75 billion in the first quarter of 2024. Late last year, Amazon put in another $1.3 billion, and also agreed to invest an additional $2.7 billion by the end of 2025, the filing stated.
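
As a rough, back-of-the-envelope illustration using only the figures disclosed above (and mirroring the article's framing, which treats the full $8 billion commitment as the cost basis), the arithmetic works out as follows:

# Back-of-the-envelope check of the disclosed figures, in billions of dollars.
# Illustrative only; amounts come from the article above, and the final $2.7 billion
# tranche is a commitment rather than money already invested.
tranches = {
    "September 2023": 1.25,
    "Q1 2024": 2.75,
    "Late 2024": 1.30,
    "Committed by end of 2025": 2.70,
}
invested = sum(tranches.values())   # 8.0
fair_value = 14.0                   # Amazon's estimate as of the end of December
paper_gain = fair_value - invested  # 6.0
print(f"Total: ${invested:.2f}B, paper gain: ${paper_gain:.2f}B ({paper_gain / invested:.0%})")
# Prints: Total: $8.00B, paper gain: $6.00B (75%)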

Amazon's spokesperson declined to comment.

In the filing, Amazon said it used convertible notes to invest in Anthropic. These are classified as "available-for-sale," an accounting term generally denoting securities expected to be held for more than a year. Convertible notes can be exchanged for equity in the future, depending on how they are structured and whether certain thresholds are met.

There are a lot of assumptions baked into Amazon's fair value estimates. This is common for investments in startups, which are often young businesses that could either succeed or fail over the long term. Amazon's filing classified the convertible notes as "Level 3" assets, which use "valuations based on unobservable inputs reflecting our own assumptions, consistent with reasonably available assumptions made by other market participants."

"These valuations require significant judgment," the company added.

Anthropic is one of the leading AI startups, best known for its Claude family of models and related services. The OpenAI rival is currently raising money at a $60 billion valuation, a significant jump from the $18 billion value it saw last year, according to the Wall Street Journal.

Amazon has built a close relationship with Anthropic in recent years. In addition to the funding, Anthropic agreed to use Amazon's cloud computing services and AI chips. Last year, Anthropic said it plans to use a new AI supercomputer made up of Amazon-made chip clusters.

Amazon's total investment value in public and private companies was $22.1 billion as of the end of December, the filing said. Publicly traded companies, such as Rivian, accounted for just $4.6 billion. Among the private companies Amazon invested in are Scale AI, Hugging Face, and X-energy, according to Pitchbook.

Do you work at Amazon? Got a tip?

Contact the reporter Eugene Kim via the encrypted messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Insider's source guide for other tips on sharing information securely.

Read the original article on Business Insider

Amazon makes Zoom the 'standard' app for internal meetings and starts using Microsoft 365 tools, internal memo shows

19 February 2025 at 15:46
Amazon's Seattle office inside glass spheres.

Lindsey Wasson/Reuters

  • Amazon is adopting Zoom as its official meeting app, replacing its own Chime.
  • Chime, launched in 2017, was Amazon's main meeting app and had limited external use.
  • Amazon has also started rolling out Microsoft's 365 tools.

Amazon is making Zoom its official meeting application across the company, according to an internal memo obtained by Business Insider.

Zoom replaces Chime, Amazon's homegrown meeting application that can run video and audio calls, the memo said. Chime, which launched in 2017, had been the default meeting app for Amazon's corporate employees.

"Zoom is replacing Amazon Chime as the standard meeting application for Amazon internal meetings," the memo said.

Amazon's companywide use of Zoom, which hasn't been previously reported, is a major win for the videoconferencing app maker, which has lost most of its pandemic-driven stock price gains in recent years.

Amazon will deprecate the Chime application, partly because of its "limited" external use, the company's spokesperson said in an email to BI.

"When we decide to retire a service or feature, it is typically because we've introduced something better or our partners offer a solution that is a good fit for our customers as well as our own employees. In Chime's case, its use outside of Amazon was limited, and our partners offer great collaboration solutions, so we will lean into those," the spokesperson said.

Zoom's spokesperson didn't respond to a request for comment.

Amazon has also started rolling out Microsoft's 365 productivity tools internally, the memo said. Amazon has committed to spending $1 billion over five years to use M365, BI previously reported.

Amazon employees will migrate to Microsoft's cloud applications on a "rolling basis," the memo said, and they will gain access to services such as Outlook, Word, Excel, and PowerPoint.

The internal memo added that Microsoft Teams would be available for meetings where "full integration with M365 is needed," while Cisco Webex would work for meetings with customers who use Cisco's product.

In a separate blog post, Amazon wrote that Wednesday's change wouldn't impact Chime's software development kit, which lets customers build their communication applications.

Chime is the second product shutdown Amazon has announced this week. On Tuesday, the company said it was scrapping Inspire, a TikTok-style video and photo feed.

Do you work at Amazon? Got a tip?

Contact the reporter Eugene Kim via the encrypted messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Insider's source guide for other tips on sharing information securely.

Read the original article on Business Insider

Amazon CEO says cloud business would have grown faster if it had more AI chips, power, and server components

6 February 2025 at 16:08
Amazon CEO Andy Jassy
Amazon CEO Andy Jassy

Amazon

  • Amazon Web Services growth hindered by capacity constraints in data centers, says CEO.
  • Constraints stem from AI chip shortages, server components, and energy supply issues.
  • Amazon plans $105 billion in 2025 capital expenditures, largely led by AI.

Amazon CEO Andy Jassy said on Thursday that the Amazon Web Services cloud business could grow faster if not for "capacity constraints" across its data centers.

He said the shortage has been caused by difficulty procuring AI chips, server components like motherboards, and the energy to power data centers.

"It is true that we could be growing faster, if not for some of the constraints on capacity," Jassy said during Thursday's call with analysts.

On Thursday, AWS reported a 19% increase in fourth-quarter sales to $28.8 billion, which was slightly below Wall Street estimates. Amazon's stock dropped roughly 4% in after-hours trading as the company gave lower-than-anticipated first-quarter guidance.

Jassy's remarks echo recent statements made by cloud rivals Microsoft and Google. Microsoft's CFO Amy Hood said last week that the company is in "a pretty constrained capacity place" when it comes to meeting demand, while Google's leadership said on Tuesday that it ended 2024 with "more (AI) demand than capacity."

Jassy said on Thursday that he expects the constraints to "relax" in the second half of 2025, adding it is "hard to complain" when AWS's AI business is on pace to generate "multi-billion" dollars in annual sales.

Amazon expects AI demand to continue growing. For 2025, the company forecast roughly $105 billion in capital expenditures, mostly for data centers, after spending a record $26.3 billion during the fourth quarter.

Jassy said AWS doesn't make that kind of financial commitment unless there are "significant signals of demand."

"When AWS is expanding its capex, particularly what we think is one of these once-in-a-lifetime type of business opportunities like AI represents, I think it's actually quite a good sign, medium to long term for the AWS business," Jassy said.

Do you work at Amazon? Got a tip?

Contact the reporter Eugene Kim via the encrypted messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Insider's source guide for other tips on sharing information securely.

Read the original article on Business Insider

In a recent internal meeting, an Amazon VP said the company is committed to DEI despite some signs of a pullback

4 February 2025 at 13:57
Mai-Lan Tomsen Bukovec by a staircase
AWS VP of technology Mai-Lan Tomsen Bukovec

Lucas Jackson/Amazon

  • Amazon recently changed some of its DEI websites and halted some programs.
  • An AWS VP clarified Amazon's continued commitment to DEI in a meeting in late January.
  • Other firms such as Meta and Target recently pulled back on DEI efforts.

An Amazon executive told employees during a recent internal meeting that the company remains committed to diversity, equity, and inclusion.

The tech giant has changed some of its websites about DEI and halted some programs. That prompted some employees to ask about the changes ahead of a late-January meeting with Mai-Lan Tomsen Bukovec, VP of technology at Amazon Web Services.

The question about DEI received the highest interest among employees, Tomsen Bukovec noted, according to a meeting transcript obtained by Business Insider.

The AWS executive said she wasn't aware of the changes to Amazon's DEI websites. She said she asked internally to ensure that Amazon was still committed to its DEI programs. The feedback she received was "no change," she said.

"We are not pulling back on DEI initiatives. I looked at all the changes. We are not making any changes to any of the benefits," Tomsen Bukovec added. "There's no change to the commitment, but we didn't roll it out that well."

Tomsen Bukovec, who's been at AWS for almost 15 years, is a senior, high-profile executive at the cloud division. She often speaks on behalf of the company at public events and was one of the speakers at last year's re:Invent conference. In 2023, Tomsen Bukovec was included in BI's AI 100 list.

Other companies, including Meta, Target, and McDonald's, have pulled back or ended DEI programs in recent months as pressure from conservative activists and Donald Trump's administration increased. At the same time, other companies, such as JPMorgan and Costco, are continuing support for DEI.

Media reports in recent months have suggested that Amazon's approach to DEI may be changing.

In December, Candi Castleberry, Amazon's VP of inclusive experiences and technology, said the company was shutting down several "outdated" DEI programs to focus on those with "proven outcomes," according to Bloomberg. Amazon also recently removed or changed some words about its DEI benefits on some of its websites, The Information previously reported.

During the late January staff meeting, Tomsen Bukovec said Amazon consolidated some words and dropped some paragraphs from its DEI websites and blogs. But the company "didn't change the underlying capabilities that people can do, and we did not change any of the commitment that we have to the programs," she said.

She specifically addressed a transgender benefit that Amazon offers and said "we are not making any changes to that."

"We just made some changes to the words, and then I think we deprioritized one or two programs that were not making much of a difference anyway," she added.

Amazon's spokesperson declined to comment on the specifics of this story but shared a link to the company's policy positions that show its commitment "to creating a diverse and inclusive" culture.

The spokesperson also shared a copy of Castleberry's December email that said, "We remain dedicated to delivering inclusive experiences for customers, employees, and communities around the world."

Do you work at Amazon? Got a tip?

Contact the reporter Eugene Kim via the encrypted-messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Insider's source guide for other tips on sharing information securely.

Read the original article on Business Insider

Inside Amazon's plan to cut managers: More direct reports, fewer senior hires, and pay cuts

30 January 2025 at 02:00
Someone trying to climb up the Amazon logo and falling down

iStock; Rebecca Zisser/BI

  • Amazon wants fewer managers.
  • Internal guidelines for a big sales team at AWS give a glimpse into its plans.
  • Amazon's reorg reflects a broader trend in corporate America to trim middle managers.

Amazon's effort to whittle down middle management is taking shape.

The company recently told some managers to increase their direct reports, make fewer senior hires, and down-level or cut pay for some employees, according to people familiar with the matter and internal guidelines shared with a large sales team at Amazon Web Services.

In September, Amazon CEO Andy Jassy announced a plan to increase the ratio of individual contributors to managers by 15% by the end of March. By reducing management layers, the company hopes to "decrease bureaucracy" and "move fast," Jassy said at the time.

The internal guidelines give an early glimpse into how Amazon intends to complete a management shake-up that could impact thousands of corporate employees. Amazon hasn't shared anything publicly since unveiling plans to have fewer managers last year.

In an email to BI, Amazon's spokesperson said the internal guidance document might be true for a specific team but not for the whole company. Individual units communicate directly with employees as they make changes to their structures under the broad mandate of creating "customer-centric, agile organizations that empower fast decision-making," the spokesperson added.

Minimum 8 direct reports

Amazon CEO Andy Jassy
Amazon CEO Andy Jassy

Amazon

One key point in the AWS team guidelines document obtained by BI is to put at least 8 direct reports under each manager. The new "span of control" directive is an increase from the minimum of 6 direct reports each manager was expected to have in the past, according to the guidelines.

It's not the first time Amazon has made changes like this. In 2017, Amazon founder and former CEO Jeff Bezos asked every manager to have at least 6 direct reports as part of a plan to "de-layer" the company.
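
To illustrate the arithmetic behind a span-of-control change, here is a simplified, hypothetical sketch. The team size is made up, and the model assumes a single flat layer of individual contributors rather than Amazon's actual org structure:

import math

def managers_needed(individual_contributors: int, span: int) -> int:
    # Managers required for one flat reporting layer at a given minimum span of control.
    return math.ceil(individual_contributors / span)

ics = 240  # hypothetical team size
for span in (6, 8):
    managers = managers_needed(ics, span)
    print(f"span={span}: {managers} managers, IC-to-manager ratio {ics / managers:.1f}")
# span=6: 40 managers, ratio 6.0
# span=8: 30 managers, ratio 8.0 -- a quarter fewer managers for the same headcount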

The latest mandate has made some employees concerned enough to ask questions about it. At an internal all-hands meeting in November, Jassy addressed the issue, saying Amazon went on a massive hiring spree during the pandemic, which "stretched" the company and led to slower decision-making, BI previously reported.

"I hate bureaucracy," Jassy said during the meeting.

Amazon's spokesperson told BI that the ideal team size will vary, and no companywide mandate requires all managers to have a certain number of direct reports. The spokesperson added that a full-time employee with one or more direct reports is considered a manager "in title," regardless of the number of direct reports they have.

Pause hiring new managers

AWS CEO Matt Garman
AWS CEO Matt Garman

Amazon

The guidelines for the AWS sales team also require pausing new manager hires. This is temporary until the team understands the full ramifications of the organizational change, the guidelines said.

This particular AWS team has discussed hiring fewer middle managers since at least April of last year, according to another internal document obtained by BI. The team found that the hiring pace of middle managers had outpaced entry-level employees in recent years, resulting in increased costs. The document recommended hiring more early-career professionals to shift the team's structure from a "diamond"-shaped organization to a "pyramid" shape where more than half the team is concentrated in lower-level positions.

This trend is happening in other parts of corporate America. Besides Amazon, companies including Meta, Citi, and UPS have made changes to trim supervisory roles. Data from last year showed that companies are cutting middle managers and not backfilling those positions.

Another Amazon internal guideline from September, previously obtained by BI, said the manager reduction plan could result in role eliminations as "organizations may identify roles that are no longer required." In a note published last week, Bank of America analysts estimated that Amazon could save roughly $1.5 billion in annual costs from the manager cuts.

Amazon's spokesperson said the company adjusts hiring based on business needs, and it continues to have open manager roles available. The spokesperson added that there are many ways to reduce the number of managers without terminating them, such as by reconfiguring teams or reassigning employees.

Down-leveling

Another aspect of the plan is to down-level some of the managers to individual contributor roles, according to the recent guidelines for the AWS sales team. Two current AWS employees told BI that several managers have been pushed down a level due to the new approach. That meant being moved to a smaller pay band, one of the people said.

One former employee who left recently said the promotion criteria are changing. This person said some AWS teams now require managers to have more people under them to qualify for a promotion.

Amazon's spokesperson told BI that moving from an individual contributor to a manager role, or vice versa, can happen without changing levels. There's no companywide requirement for team size to get promoted as the promotion criteria involve many factors, the spokesperson added.

For several Amazon employees who spoke to BI, the reorganization is creating a bigger problem: a culture of fear. They said managers seem to shy away from taking risks or making hard decisions because they don't want to be held accountable for failures, which could make them a target of the cuts.

"No one wants to be the one that failed," one of the people said.

Do you work at Amazon? Got a tip?

Contact the reporter Eugene Kim via the encrypted-messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Insider's source guide for other tips on sharing information securely.

Read the original article on Business Insider

Big AWS customers, including Stripe and Toyota, are hounding the cloud giant for access to DeepSeek AI models

27 January 2025 at 15:00
AWS CEO Matt Garman
AWS CEO Matt Garman

Amazon

  • DeepSeek's AI models have taken the tech industry by storm in recent days.
  • More than 20 big AWS customers have asked Amazon for access to DeepSeek models: internal document.
  • AWS's strategy focuses on offering diverse AI models, unlike competitors that prioritize their own.

Big Amazon cloud customers have been pressing the tech giant to give them access to DeepSeek's AI models, the latest sign of the Chinese startup taking the tech world by storm.

More than 20 key clients of Amazon Web Services asked the company over the weekend to make DeepSeek models available through Amazon's Bedrock AI development tool, according to an internal document obtained by Business Insider.

Toyota, Stripe, Cisco, Yelp, and Workday were among AWS customers asking for this access, with many wanting to test and evaluate DeepSeek's AI capabilities internally. Other companies that made similar requests include Mercado Libre and Kellogg, the document showed.

An Amazon spokesperson told BI that Bedrock customers use multiple models to meet their unique needs and the company remains focused on "providing our customers with choice."

"We are always listening to customers to bring the latest emerging and popular models to AWS," the spokesperson said.

Spokespeople for Stripe, Cisco, Yelp, Workday, Toyota, Mercado Libre, and Kellogg didn't respond to requests for comment.

DeepSeek recently rolled out AI models that are on par with, or better than, some of Silicon Valley's top offerings β€” at a fraction of the cost. Its cheap pricing, strong performance, and compute-efficiency have raised questions about US tech companies' massive spending on competing products.

Tech stocks, including Nvidia, Broadcom, and TSMC, plunged on Monday as investors tried to assess the long-term implications of DeepSeek's initial success.

Amazon shares dropped in early Monday trading but rallied during the day to end up 0.2%.

The wave of customer requests highlights Amazon's strategic advantage in the generative AI race. From early on, AWS focused on providing customers with as many AI models as possible through Bedrock, believing that no single model would dominate the market.

That's a contrast to other tech companies, such as OpenAI and Google, which have spent heavily on building their own frontier AI models.

AWS still has an internal AGI team developing its own AI models, and the company unveiled the latest version, Nova, in December. However, Amazon has mostly prioritized offering a range of other AI models through the cloud.

Amazon often makes decisions based on customer feedback, and the company is likely considering making DeepSeek's models available through Bedrock after such a flood of client requests, according to a person familiar with the matter.

One AWS employee told BI that the company is not in "panic" mode over DeepSeek like some other tech companies. If DeepSeek's models are good, "we'll just host it on Bedrock," this person said. They asked not to be identified discussing private matters.

"We expect to see many more models like this β€” both large and small, proprietary and open-source β€” excel at different tasks," the Amazon spokesperson told BI, while noting that customers can access some DeepSeek-related products on AWS through tools such as Bedrock.

Do you work at Amazon? Got a tip?

Contact the reporter Eugene Kim via the encrypted-messaging apps Signal or Telegram (+1-650-942-3061) or email ([email protected]). Reach out using a nonwork device. Check out Insider's source guide for other tips on sharing information securely.

Read the original article on Business Insider
