Broadcom's stock surged in recent weeks, pushing the company's market value over $1 trillion.
Broadcom is crucial for companies seeking alternatives to Nvidia's AI chip dominance.
Custom AI chips are gaining traction, enhancing tech firms' bargaining power, analysts say.
The rise of AI, and the computing power it requires, is bringing all kinds of previously under-the-radar companies into the limelight. This week it's Broadcom.
Broadcom's stock has soared since late last week, catapulting the company into the $1 trillion market cap club. The boost came from a blockbuster earnings report in which custom AI chip revenue grew 220% compared to last year.
In addition to selling lots of parts and components for data centers, Broadcom designs and sells ASICs, or application-specific integrated circuits, the industry acronym for custom chips.
Designers of custom AI chips, chief among them Broadcom and Marvell, are headed into a growth phase, according to Morgan Stanley.
Custom chips are picking up speed
The biggest players in AI buy a lot of chips from Nvidia, the $3 trillion giant with an estimated 90% share of the advanced AI chip market.
Heavily relying on one supplier isn't a comfortable position for any company, though, and many large Nvidia customers are also developing their own chips. Most tech companies don't have large teams of silicon and hardware experts in-house. Of the companies they might commission to design a custom chip, Broadcom is the leader.
Though multi-purpose chips like Nvidia's and AMD's graphics processing units are likely to maintain the largest share of the AI chip market in the long term, custom chips are growing fast.
Morgan Stanley analysts this week forecast the market for ASICs to nearly double to $22 billion next year.
Much of that growth is attributable to Amazon Web Services' Trainium AI chip, according to Morgan Stanley analysts. Then there are Google's in-house AI chips, known as TPUs, which Broadcom helps make.
In terms of actual value of chips in use, Amazon and Google dominate. But OpenAI, Apple, and TikTok parent company ByteDance are all reportedly developing chips with Broadcom, too.
ASICs bring bargaining power
Custom chips can offer better value in terms of performance per dollar, according to Morgan Stanley's research.
ASICs can also be designed to perfectly match tech companies' unique internal workloads, according to the bank's analysts. The better these custom chips get, the more bargaining power they may provide when tech companies are negotiating with Nvidia over buying GPUs. But this will take time, the analysts wrote.
In addition to Broadcom, Silicon Valley neighbor Marvell is making gains in the ASICs market, along with Asia-based players Alchip Technologies and MediaTek, they added in a note to investors.
Analysts don't expect custom chips to ever fully replace Nvidia GPUs, but without them, cloud service providers like AWS, Microsoft, and Google would have much less bargaining power against Nvidia.
"Over the long term, if they execute well, cloud service providers may enjoy greater bargaining power in AI semi procurement with their own custom silicon," the Morgan Stanley analysts explained.
Nvidia's big R&D budget
This may not be all bad news for Nvidia. A $22 billion ASICs market is smaller than Nvidia's revenue for just one quarter.
Nvidia's R&D budget is massive, and many analysts are confident in its ability to stay at the bleeding edge of AI computing.
And as Nvidia rolls out new, more advanced GPUs, its older offerings get cheaper and potentially more competitive with ASICs.
"We believe the cadence of ASICs needs to accelerate to stay competitive to GPUs," the Morgan Stanley analysts wrote.
Still, Broadcom and chip manufacturers on the supply chain rung beneath, such as TSMC, are likely to get a boost every time a giant cloud company orders up another custom AI chip.
AWS's new AI chips aren't meant to go after Nvidia's lunch, said Gadi Hutt, a senior director of customer and product engineering at the company's chip-designing subsidiary, Annapurna Labs. The goal is to give customers a lower-cost option, as the market is big enough for multiple vendors, Hutt told Business Insider in an interview at AWS's re:Invent conference.
"It's not about unseating Nvidia," Hutt said, adding, "It's really about giving customers choices."
AWS has spent tens of billions of dollars on generative AI. This week the company unveiled its most advanced AI chip, called Trainium 2, which can cost roughly 40% less than Nvidia's GPUs, and a new supercomputer cluster using the chips, called Project Rainier. Earlier versions of AWS's AI chips had mixed results.
Hutt insists this isn't a competition but a joint effort to grow the overall size of the market. The customer profiles and AI workloads they target are also different. He added that Nvidia's GPUs would remain dominant for the foreseeable future.
In the interview, Hutt discussed AWS's partnership with Anthropic, which is set to be Project Rainier's first customer. The two companies have worked closely over the past year, and Amazon recently invested an additional $4 billion in the AI startup.
He also shared his thoughts on AWS's partnership with Intel, whose CEO, Pat Gelsinger, just retired. He said AWS would continue to work with the struggling chip giant because customer demand for Intel's server chips remained high.
Last year AWS said it was considering selling AMD's new AI chips. But Hutt said those chips still weren't available on AWS because customers hadn't shown strong demand.
This Q&A has been edited for clarity and length.
There have been a lot of headlines saying Amazon is out to get Nvidia with its new AI chips. Can you talk about that?
I usually look at these headlines, and I giggle a bit because, really, it's not about unseating Nvidia. Nvidia is a very important partner for us. It's really about giving customers choices.
We have a lot of work ahead of us to ensure that we continuously give more customers the ability to use these chips. And Nvidia is not going anywhere. They have a good solution and a solid road map. We just announced the P6 instances [AWS servers with Nvidia's latest Blackwell GPUs], so there's a continuous investment in the Nvidia product line as well. It's really to give customers options. Nothing more.
Nvidia is a great supplier of AWS, and our customers love Nvidia. I would not discount Nvidia in any way, shape, or form.
So you want to see Nvidia's use case increase on AWS?
If customers believe that's the way they need to go, then they'll do it. Of course, if it's good for customers, it's good for us.
The market is very big, so there's room for multiple vendors here. We're not forcing anybody to use those chips, but we're working very hard to ensure that our major tenets, which are high performance and lower cost, will materialize to benefit our customers.
Does it mean AWS is OK being in second place?
It's not a competition. There's no machine-learning award ceremony every year.
In the case of a customer like Anthropic, there's very clear scientific evidence that larger compute infrastructure allows you to build larger models with more data. And if you do that, you get higher accuracy and more performance.
Our ability to scale capacity to hundreds of thousands of Trainium 2 chips gives them the opportunity to innovate on something they couldn't have done before. They get a 5x boost in productivity.
Is being No. 1 important?
The market is big enough. No. 2 is a very good position to be in.
I'm not saying I'm No. 2 or No. 1, by the way. But it's really not something I'm even thinking about. We're so early in our journey here in machine learning in general, the industry in general, and also on the chips specifically, we're just heads down serving customers like Anthropic, Apple, and all the others.
We're not even doing competitive analysis with Nvidia. I'm not running benchmarks against Nvidia. I don't need to.
For example, there's MLPerf, an industry performance benchmark. Companies that participate in MLPerf have performance engineers working just to improve MLPerf numbers.
That's completely a distraction for us. We're not participating in that because we don't want to waste time on a benchmark that isn't customer-focused.
On the surface, it seems like helping companies grow on AWS isn't always beneficial for AWS's own products because you're competing with them.
We are the same company that is the best place Netflix is running on, and we also have Prime Video. It's part of our culture.
I will say that there are a lot of customers that are still on GPUs. A lot of customers love GPUs, and they have no intention to move to Trainium anytime soon. And that's fine, because, again, we're giving them the options and they decide what they want to do.
Do you see these AI tools becoming more commoditized in the future?
I really hope so.
When we started this in 2016, the problem was that there was no operating system for machine learning. So we really had to invent all the tools that go around these chips to make them work for our customers as seamlessly as possible.
If machine learning becomes commoditized on the software and hardware sides, it's a good thing for everybody. It means that it's easier to use those solutions. But running machine learning meaningfully is still an art.
What are some of the different types of workloads customers might want to run on GPUs versus Trainium?
GPUs are more of a general-purpose processor of machine learning. All the researchers and data scientists in the world know how to use Nvidia pretty well. If you invent something new, if you do that on GPU, then things will work.
If you invent something new on specialized chips, you'll have to either ensure compiler technology understands what you just built or create your own compute kernel for that workload. We're focused mainly on use cases where our customers tell us, "Hey, this is what we need." Usually the customers we get are the ones that are seeing increased costs as an issue and are trying to look for alternatives.
So the most advanced workloads are usually reserved for Nvidia chips?
Usually. If data-science folks need to continuously run experiments, they'll probably do that on a GPU cluster. When they know what they want to do, that's where they have more options. That's where Trainium really shines, because it gives high performance at a lower cost.
AWS CEO Matt Garman previously said the vast majority of workloads will continue to be on Nvidia.
It makes sense. We give value to customers who have a large spend and are trying to see how they can control the costs a bit better. When Matt says the majority of the workloads, it means medical imaging, speech recognition, weather forecasting, and all sorts of workloads that we're not really focused on right now because we have large customers who ask us to do bigger things. So that statement is 100% correct.
In a nutshell, we want to continue to be the best place for GPUs and, of course, Trainium when customers need it.
What has Anthropic done to help AWS in the AI space?
They have very strong opinions of what they need, and they come back to us and say, "Hey, can we add feature A to your future chip?" It's a dialogue. Some ideas they came up with weren't feasible to even implement in a piece of silicon. We actually implemented some ideas, and for others we came back with a better solution.
Because they're such experts in building foundation models, this really helps us home in on building chips that are really good at what they do.
We just announced Project Rainier together. This is someone who wants to use a lot of those chips as fast as possible. It's not an idea; we're actually building it.
Can you talk about Intel? AWS's Graviton chips are replacing a lot of Intel chips at AWS data centers.
I'll correct you here. Graviton is not replacing x86. It's not like we're yanking out x86 and putting Graviton in place. But again, following customer demand, more than 50% of our recent landings on CPUs were Graviton.
It means that the customer demand for Graviton is growing. But we're still selling a lot of x86 cores too for our customers, and we think we're the best place to do that. We're not competing with these companies, but we're treating them as good suppliers, and we have a lot of business to do together.
How important is Intel going forward?
They will for sure continue to be a great partner for AWS. There are a lot of use cases that run really well on Intel cores. We're still deploying them. There's no intention to stop. It's really following customer demand.
Is AWS still considering selling AMD's AI chips?
AMD is a great partner for AWS. We sell a lot of AMD CPUs to customers as instances.
The machine-learning product line is always under consideration. If customers strongly indicate that they need it, then there's no reason not to deploy it.
And you're not seeing that yet for AMD's AI chips?
Not yet.
How supportive are Amazon CEO Andy Jassy and Garman of the AI chip business?
They're very supportive. We meet them on a regular basis. There's a lot of focus across leadership in the company to make sure that the customers who need ML solutions get them.
There's also a lot of collaboration within the company with science and service teams that are building solutions on those chips. Other Amazon products, like Rufus, the AI assistant available to all Amazon customers, run entirely on Inferentia and Trainium chips.
In what has become a bit of an annual tradition, I sat down with Amazon CTO Werner Vogels at AWS re:Invent this week. Another annual tradition now is that Vogels, who joined Amazon in 2004, publishes a series of predictions for the next year. It'd be easy to think that this year's predictions are all […]
AWS said developers spend most of their time on non-coding tasks, impacting productivity.
It introduced Amazon Q Developer, an AI agent to aid developers, at the re:Invent keynote on Tuesday.
But junior engineers are concerned AI tools like Amazon Q could reduce coder demand.
Artificial intelligence could give coders more time to code. Programmers aren't sure whether that's a good thing.
In a post Tuesday, Amazon Web Services said developers report spending an average of "just one hour per day" on actual coding.
The rest is eaten up by "tedious, undifferentiated tasks" such as learning codebases, drafting documents, testing, overseeing releases, fixing problems, and hunting down vulnerabilities, AWS said. The company didn't say where it got the data.
AWS CEO Matt Garman spoke to the developers in the audience at the company's re:Invent keynote on Tuesday, introducing a tool he said would give them more time to focus on creativity. Amazon Q Developer is an AI agent that AWS is rolling out in two tiers with free and paid options.
The announcement is another indication that technology like AI could upend the way many coders do their jobs. Some have argued that AI will remove some of the tedium from tasks like creating documentation and generating basic code. That could be great for coders' productivity, and perhaps for their enjoyment of the job, yet it could also mean employers need fewer of them.
GitLab has reported that developers spend more than 75% of their time on tasks other than coding. Several veteran software engineers previously told BI that the time they spend coding is perhaps closer to half.
Software engineers on job forums like Blind are discussing how much they should rely on an AI assistant for their work. Some have asked for recommendations for the best agent, receiving mixed replies of "your own brain" alongside genuine reviews. Others worry that AI has already become a crutch in their coding process.
AWS isn't the only tech giant offering AI to coders. Google CEO Sundar Pichai recently said that AI generates more than a quarter of the new code created at the search company. He said the technology was "boosting productivity and efficiency." Workers review the code that AI produces, Pichai said.
"This helps our engineers do more and move faster," he said. "I'm energized by our progress and the opportunities ahead, and we continue to be laser-focused on building great products."
The rise of AI could be worrisome for newbie programmers who need to develop their skills, according to Jesal Gadhia, head of engineering at Thoughtful AI, which creates AI tools for healthcare providers.
"Junior engineers," Gadhia previously told BI, "have a little bit of a target behind their back."
He said that when an AI tool touted as the "first AI software engineer" came out this year, he received texts from nervous friends.
"There was a lot of panic. I had a lot of friends of mine who messaged me and said, 'Hey, am I going to lose my job?'" Gadhia said.
As businesses move from trying out generative AI in limited prototypes to putting them into production, they are becoming increasingly price conscious. Using large language models (LLMs) isn't cheap, after all. One way to reduce cost is to go back to an old concept: caching. Another is to route simpler queries to smaller, more cost-efficient […]
At last year's AWS re:Invent conference, Amazon's cloud computing unit launched SageMaker HyperPod, a platform for building foundation models. It's no surprise, then, that at this year's re:Invent, the company is announcing a number of updates to the platform, with a focus on making model training and fine-tuning on HyperPod more efficient and cost-effective for […]
SageMaker has long been AWS' fully managed platform for building, training, and deploying machine learning and generative AI models. Over time, however, an ecosystem of applications has sprung up around AI and ML models for performing tasks like managing experiments, evaluating model quality, and security. Those always lived outside of SageMaker and had to be […]
AWS unveiled a new AI chip and a supercomputer at its re:Invent conference on Tuesday.
It's a sign that Amazon is ready to reduce its reliance on Nvidia for AI chips.
Amazon isn't alone: Google, Microsoft, and OpenAI are also designing their own AI chips.
Big Tech's next AI era will be all about controlling silicon and supercomputers of their own. Just ask Amazon.
At its re:Invent conference on Tuesday, the tech giant's cloud computing unit, Amazon Web Services, unveiled the next line of its AI chips, Trainium3, while announcing a new supercomputer that will be built with its own chips to serve its AI ambitions.
It marks a significant shift from the status quo that has defined the generative AI boom since OpenAI's release of ChatGPT, in which the tech world has relied on Nvidia to secure a supply of its industry-leading chips, known as GPUs, for training AI models in huge data centers.
While Nvidia has a formidable moat (experts say its hardware-software combination serves as a powerful vendor lock-in system), AWS' reveal shows companies are finding ways to take ownership of the tech shaping the next era of AI development.
Putting your own chips on the table
On the chip side, Amazon shared that Trainium2, which was first unveiled at last year's re:Invent, was now generally available. Its big claim was that the chip offers "30-40% better price performance" than the current generation of servers with Nvidia GPUs.
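As a rough illustration of what a "price performance" claim like that means, here is a minimal sketch; every throughput and cost number below is a hypothetical placeholder, not an actual AWS or Nvidia price or benchmark result.

```python
# Toy price-performance comparison. All numbers are assumptions
# for illustration, not real prices or benchmark results.

def price_performance(tokens_per_second: float, dollars_per_hour: float) -> float:
    """Work delivered per dollar: tokens/second per $/hour of server time."""
    return tokens_per_second / dollars_per_hour

gpu_server = price_performance(tokens_per_second=10_000, dollars_per_hour=40.0)
custom_silicon_server = price_performance(tokens_per_second=9_000, dollars_per_hour=25.0)

gain = (custom_silicon_server / gpu_server - 1) * 100
print(f"Price-performance gain: {gain:.0f}%")  # ~44% with these made-up numbers
```

Note that in this toy example the custom-silicon server is outright slower yet still wins on price performance, which is the crux of the custom-chip pitch.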
Such a gain would mark a big step up from its first series of chips, which analysts at SemiAnalysis described on Tuesday as "underwhelming" for generative AI training and used instead for "training non-complex" workloads within Amazon, such as credit card fraud detection.
"With the release of Trainium2, Amazon has made a significant course correction and is on a path to eventually providing a competitive custom silicon," the SemiAnalysis researchers wrote.
Trainium3, which AWS gave a preview of ahead of a late 2025 release, has been billed as a "next-generation AI training chip." Servers loaded with Trainium3 chips offer four times greater performance than those packed with Trainium2 chips, AWS said.
Matt Garman, the CEO of AWS, told The Wall Street Journal that some of the company's chip push is due to there being "really only one choice on the GPU side" at present, given Nvidia's dominant place in the market. "We think that customers would appreciate having multiple choices," he said.
It's an observation that others in the industry have noted and responded to. Google has been busy designing its own chips that reduce its dependence on Nvidia, while OpenAI is reported to be exploring custom, in-house chip designs of its own.
But having in-house silicon is just one part of this.
The supercomputer advantage
AWS acknowledged that as AI models trained on GPUs continue to get bigger, they are "pushing the limits of compute and networking infrastructure."
With this in mind, AWS shared that it was working with Anthropic to build an "UltraCluster" of servers that form the basis of a supercomputer it has named Project Rainier. According to Amazon, it will scale model training across "hundreds of thousands of Trainium2 chips."
"When completed, it is expected to be the world's largest AI compute cluster reported to date available for Anthropic to build and deploy their future models on," AWS said in a blog, adding that it will be "over five times the size" of the cluster used to build Anthropic's last model.
The supercomputer push follows similar moves elsewhere. The Information first reported earlier this year that OpenAI and Microsoft were working together to build a $100 billion AI supercomputer called Stargate.
Of course, Nvidia is also in the supercomputer business and aims to make them a big part of its allure to companies looking to use its next-generation AI chips, Blackwell.
AWS made no secret that it remains tied to Nvidia for now. In an interview with The Wall Street Journal, Garman acknowledged that Nvidia is responsible for "99% of the workloads" for training AI models today and doesn't expect that to change anytime soon.
That said, Garman reckoned "Trainium can carve out a good niche" for itself. He'd be wise to recognize that everyone else is busy carving out a niche for themselves, too.
AWS announced plans for an AI supercomputer, UltraCluster, with Trainium 2 chips at re:Invent.
AWS may be able to reduce reliance on Nvidia by developing its own AI infrastructure.
Apple said it's using Trainium 2 chips for Apple Intelligence.
Matt Garman, the CEO of Amazon Web Services, made several significant new AWS announcements at the re:Invent conference on Tuesday.
His two-and-a-half-hour keynote delved into AWS's current software and hardware offerings and updates, with words from clients including Apple and JPMorgan. Graphics processing units (GPUs), supercomputers, and a surprise Apple cameo stuck out among the slew of information.
AWS, the cloud computing arm of Amazon, has been developing its own semiconductors to train AI. On Tuesday, Garman said it's creating UltraServers, each containing 64 of its Trainium 2 chips, so companies can scale up their GenAI workloads.
It's also building an AI supercomputer, an UltraCluster made up of UltraServers, in partnership with AI startup Anthropic. Named Project Rainier, it will be "the world's largest AI compute cluster reported to date available for Anthropic to build and deploy its future models on" when completed, according to an Amazon blog post. Amazon has invested $8 billion in Anthropic.
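For a sense of the physical scale implied, a back-of-the-envelope calculation helps: AWS says each UltraServer holds 64 Trainium 2 chips, and Project Rainier targets "hundreds of thousands" of chips. The 200,000-chip total in the sketch below is an assumed round number for illustration only.

```python
# Back-of-the-envelope UltraCluster sizing. The chips-per-server
# figure comes from the keynote; the total chip count is assumed,
# since AWS said only "hundreds of thousands" of Trainium 2 chips.

CHIPS_PER_ULTRASERVER = 64
ASSUMED_TOTAL_CHIPS = 200_000

ultraservers = ASSUMED_TOTAL_CHIPS // CHIPS_PER_ULTRASERVER
print(f"{ultraservers:,} UltraServers")  # 3,125 UltraServers at that assumed scale
```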
Such strides could push AWS further into competition with other tech firms in the ongoing AI arms race, including AI chip giant Nvidia.
Here are four takeaways from Garman's full keynote on Tuesday.
AWS' Trainium chips could compete with Nvidia.
Nvidia currently dominates the AI chip market with its sought-after and pricey GPUs, but Garman backed AWS's homegrown silicon during his keynote on Tuesday. His company's goal is to reduce the cost of AI, he said.
"Today, there's really only one choice on the GPU side, and it's just Nvidia. We think that customers would appreciate having multiple choices," Garman told the Wall Street Journal.
AI is growing rapidly, and the demand for chips that make the technology possible is poised to grow alongside it. Major tech companies, like Google and Microsoft, are venturing into chip creation as well to find an alternative to Nvidia.
However, Garman told The Journal he doesn't expect Trainium to dethrone Nvidia "for a long time."
"But, hopefully, Trainium can carve out a good niche where I actually think it's going to be a great option for many workloads, not all workloads," he said.
AWS also introduced Trainium3, its next-gen chip.
AWS' new supercomputer could go toe to toe with Elon Musk's xAI.
According to The Journal, the chip cluster known as Project Rainier is expected to be available in 2025. Once it is ready, Anthropic plans to use it to train AI models.
With "hundreds of thousands" of Trainium chips, it would challenge Elon Musk's xAI's Colossus β a supercomputer with 100,000 of Nvidia's Hopper chips.
Apple is considering Trainium 2 for Apple Intelligence training.
Garman said Apple is one of the customers using AWS chips, like Amazon Graviton and Inferentia, for services including Siri.
Benoit Dupin, senior director of AI and machine learning at Apple, then took to the stage at the Las Vegas conference. He said the company worked with AWS for "virtually all phases" of its AI and machine learning life cycle.
"One of the unique elements of Apple business is the scale at which we operate and the speed with which we innovate," Dupin said.
He added, "AWS has been able to keep the pace, and we've been customers for more than a decade."
Now, Dupin said Apple is in the early stages of testing Trainium 2 chips to potentially help train Apple Intelligence.
The company introduced a new generation of foundation models, Amazon Nova.
Amazon announced some new kids on the GenAI block.
AWS customers will be able to use Amazon Nova-powered GenAI applications "to understand videos, charts, and documents, or generate videos and other multimedia content," Amazon said. There are a range of models available at different costs, it said.
"Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are at least 75% less expensive than the best-performing models in their respective intelligence classes in Amazon Bedrock," Amazon said.
A year ago, AWS announced Q, its AI assistant platform for business users and developers. Q Developer is getting a wide range of updates today, and so is Q Business. The focus for Q Business is on new integrations that can help businesses bring in more data from third-party tools, the ability for third-party platforms […]
It's been close to a decade since Amazon Web Services (AWS), Amazon's cloud computing division, announced SageMaker, its platform to create, train, and deploy AI models. While in previous years AWS has focused on greatly expanding SageMaker's capabilities, this year, streamlining was the goal. At its re:Invent 2024 conference, AWS unveiled SageMaker Unified Studio, a […]
GitLab, the popular developer and security platform, and AWS, the popular cloud computing and AI service, today announced that they have teamed up to combine GitLab's Duo AI assistant with Amazon's Q autonomous agents. The goal here, the two companies say, is to accelerate software innovation and developer productivity, and unlike so many partnerships in […]
At its re:Invent conference on Tuesday, Amazon Web Services (AWS), Amazon's cloud computing division, announced a new family of multimodal generative AI models it calls Nova. There are four text-generating models in total: Micro, Lite, Pro, and Premier. Micro, Lite, and Pro are available Tuesday to AWS customers, while Premier will arrive in early 2025, […]
Amazon Web Services (AWS), Amazon's cloud computing division, is launching a new tool to combat hallucinations (that is, scenarios where an AI model behaves unreliably). Announced at AWS' re:Invent 2024 conference in Las Vegas, the service, Automated Reasoning checks, validates a model's responses by cross-referencing customer-supplied info for accuracy. (Yes, the word "checks" is […]
At its re:Invent conference, Amazon's AWS cloud computing unit today announced Amazon Aurora DSQL, a new serverless, distributed SQL database that promises high availability (99.999% for multi-region availability), strong consistency, PostgreSQL compatibility, and, the company says, "4x faster reads and writes compared to other popular distributed SQL databases." AWS argues that Aurora DSQL will offer […]
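For context, a "five nines" availability figure translates into a concrete downtime budget. A quick sketch of the arithmetic (the SLA percentages are Amazon's; the conversion is just unit math):

```python
# Convert an availability percentage into the downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.99, 99.999):
    print(f"{pct}% availability -> {allowed_downtime_minutes(pct):.1f} minutes/year")
# 99.99%  -> 52.6 minutes/year
# 99.999% ->  5.3 minutes/year
```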
At its re:Invent conference, AWS today announced the general availability of its Trainium2 (T2) chips for training and deploying large language models (LLMs). These chips, which AWS first announced a year ago, will be four times as fast as their predecessors, with a single Trainium2-powered EC2 instance with 16 T2 chips providing up to 20.8 […]
AWS recently laid out growth plans for 2025 in internal documents.
One of the initiatives is focused on working more with consulting firms.
Accenture was among several consulting firms mentioned by AWS.
Amazon Web Services wants to work more with consulting firms, including Accenture, part of a broader plan to spur growth in 2025, according to an internal planning document obtained by Business Insider.
AWS is looking to expand work with external partners that can sell its cloud services to hundreds of their existing customers. AWS sees an untapped market worth $250 billion and thousands of contracts up for renewal, the document explained.
Beyond Accenture, AWS mentioned Tata Consultancy, DXC Technology, and Atos as partners in the planning document.
AWS will prioritize these partners' existing customers, proactively reaching out to them before contract-renewal time, and will help the partners become "cloud-first," the document explained.
AWS pioneered cloud computing and still leads this huge and growing market. Over the years, the company has done a lot of work with customers through in-house cloud advisers. So the plan to expand its relationships with outside consulting firms is notable.
Ruba Borno is the VP leading the initiative, which will "review and prioritize partner's incumbent customers based on workloads and relationship," the document also stated.
Borno is a Cisco veteran who joined AWS a few years ago to run its global channels and alliances operation, which works with more than 100,000 partners, including consulting firms, systems integrators, and software vendors.
These plans are part of new AWS growth initiatives that include a focus on healthcare, business applications, generative AI, and the Middle East region, BI reported last week.
These are part of the AWS sales team's priorities for next year and Amazon refers to them internally as "AGIs," short for "AWS growth initiatives," one of the internal documents shows.
A spokesman for Tata Consultancy declined to comment. Spokespeople at Accenture did not respond to a request for comment.
It's AWS re:Invent 2024 this week, Amazon's annual cloud computing extravaganza in Las Vegas, and as is tradition, the company has so much to announce, it can't fit everything into its five (!) keynotes. Ahead of the show's official opening, AWS on Monday detailed a number of updates to its overall data center strategy that […]