Meta and AMD Strike Landmark $100 Billion AI Deal, Shaking the Chip Industry

In a sweeping multi-year agreement, Meta Platforms has committed to deploy up to 6 gigawatts of AMD AI chips — enough to power millions of homes — as it bets big on what Mark Zuckerberg calls “personal superintelligence.”

In a deal that sent shockwaves through Silicon Valley, Meta Platforms and Advanced Micro Devices on Tuesday announced a landmark multi-year partnership to deploy up to 6 gigawatts of AMD’s cutting-edge Instinct GPUs across Meta’s global network of AI data centers. Analysts estimate the pact could be worth between $60 billion and $100 billion over five years, making it one of the largest technology supply agreements in history.

The announcement, made on February 24, 2026, comes just days after Meta separately committed to millions of Nvidia GPUs, signaling that the social media titan is deliberately building a multi-vendor AI hardware strategy rather than depending on any single supplier as it races to develop what CEO Mark Zuckerberg has described as “personal superintelligence.”

Deal at a Glance

- $100B: estimated total deal value over five years
- 6 GW: AMD GPU compute to be deployed
- 160M: AMD shares Meta can acquire (~10% equity)
- $135B: Meta’s total 2026 capital expenditure budget

The scope of the agreement is staggering by any measure. Six gigawatts of compute power is roughly equivalent to the electricity consumed by 4.5 million American homes. Shipments to support the first gigawatt deployment are scheduled to begin in the second half of 2026, powered by a custom AMD Instinct GPU built on the MI450 architecture and tailored specifically for Meta’s workloads, alongside AMD’s sixth-generation EPYC CPUs — codenamed “Venice” — all running on the ROCm software stack.

“We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence. I expect AMD to be an important partner for many years to come.”— Mark Zuckerberg, Founder and CEO, Meta Platforms

Equity for Orders: An Unusual Financial Structure

Perhaps the most eye-catching element of the deal is its financial engineering. As part of the agreement, AMD has granted Meta performance-based warrants to acquire up to 160 million shares of AMD common stock — approximately 10% of the company — for just one cent per share. The warrants vest alongside specific shipment and deployment milestones, and the full award is conditional on AMD’s share price reaching $600, compared to its closing price of $196.60 on the day before the announcement.

The structure mirrors a near-identical arrangement AMD signed with OpenAI in October 2025, raising questions among investors about why AMD would effectively give away a large equity stake to win orders. AMD CFO Jean Hu acknowledged the deal would generate “significant double-digit billions of dollars per gigawatt” in data center AI revenue, and AMD CEO Dr. Lisa Su defended the warrant structure as creating aligned incentives that guarantee revenue visibility and help shape AMD’s own hardware roadmap.

“If you look at the structure of our warrants in this case, it’s a very aligned incentive structure,” Su told analysts on a Tuesday morning conference call. “It guarantees a certain level of earnings to show shareholders and helps AMD co-develop hardware for multiple generations.”

Markets responded swiftly. AMD shares surged nearly 9%, closing at approximately $214, recovering ground lost after Meta’s Nvidia announcement the prior week had temporarily weighed on AMD’s stock.

Meta’s Platform-Agnostic Gamble

For Meta, the AMD deal is the latest move in what company executives are calling the “Meta Compute initiative” — an effort to massively diversify its AI infrastructure and avoid over-dependence on any single chip provider. Having committed to Nvidia’s NVL72 rack-scale systems only weeks earlier, Meta is now building a dual-track strategy in which Nvidia hardware likely handles the most demanding frontier model training and cutting-edge inference tasks, while AMD’s custom chips take on large-scale inference and “personal superintelligence” workloads at lower cost per unit.

Chip analyst Ben Bajarin of Creative Strategies called the arrangement a logical move given the constraints of the current AI hardware market. “Meta is in a unique position to control the full stack and they can use whoever’s compute they want,” Bajarin said. “It’s just a punctuation point on the fact that we are compute constrained, and deals will be done across the board.”

The AMD deal is also notable for what it is not: a commodity chip purchase. The first deployment uses custom GPUs designed in close collaboration between the two companies, built on the AMD Helios rack-scale architecture that AMD and Meta jointly developed through the Open Compute Project. Bajarin noted that Meta’s demand for customized silicon — something Nvidia has not offered in comparable public deals — gave AMD a meaningful edge.

A Defining Moment for AMD — and a Warning Shot for Nvidia

For AMD, the strategic implications extend far beyond a single revenue line. The company’s Data Center segment generated $5.38 billion in revenue in the fourth quarter of 2025 alone, up 39% year-over-year, and full-year 2025 revenue reached $34.64 billion. The Meta agreement, layered on top of the existing OpenAI deal and a growing roster of hyperscaler customers, could significantly accelerate AMD’s push to close the gap with Nvidia, which controls roughly 90% of the AI accelerator market and commands a valuation of approximately $4.66 trillion.

The announcement was timed days before Nvidia’s own quarterly earnings report, in which analysts expected the Santa Clara giant to post revenue growth of 68% year-over-year to around $66 billion. Yet the Meta-AMD partnership introduced an unmistakable new narrative: that the AI chip market may be broadening in ways that allow a credible second supplier to command multi-hundred-billion-dollar commitments from the world’s most important technology companies.

“We are proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale.”— Dr. Lisa Su, CEO, Advanced Micro Devices

What Comes Next

Industry watchers will be scrutinizing the rollout closely. The first wave of MI450-based custom GPUs is scheduled for deployment in the second half of 2026, with the remaining five gigawatts expected to roll out in tranches through 2031. A smooth integration will be critical: Meta must prove that its software stack can seamlessly abstract away the hardware differences between its Nvidia and AMD clusters, a challenge that has tripped up less sophisticated operators in the past.

Meta has also separately been in reported talks with Google about potentially deploying the search company’s tensor processing units in Meta data centers by 2027, and has been developing in-house chips — though those efforts have reportedly encountered delays. Across all of these threads, the picture that emerges is of a company determined to own every layer of its AI infrastructure and to never find itself locked out of critical compute.

With Meta pledging at least $600 billion in U.S. data center and AI infrastructure spending over the next several years — including up to $135 billion in capital expenditures in 2026 alone — Tuesday’s AMD announcement may ultimately be remembered not as a singular event, but as one chapter in the most aggressive technology infrastructure buildout in the history of the internet.
