
- Arm unveiled its first in-house chip on March 24, the Arm AGI CPU, ending 35 years of a licensing-only business with a production silicon product.
- The chip packs 136 cores, is built on TSMC’s 3nm process, and delivers more than 2× the performance-per-watt of x86 data center CPUs, according to Arm.
- Meta co-developed the chip and is the debut customer. OpenAI, Cloudflare, SAP, and SK Telecom are also signed on at launch.
- Arm CEO Rene Haas projects $15 billion in chip revenue by 2031. Arm stock jumped 6% the day of the announcement.
The Arm AGI CPU marks the biggest change in Arm’s history. The company spent 35 years licensing chip designs to others. Now it is making its own.
The new chip launched on March 24, 2026. It is built for AI data centers and targets the $1 trillion AI CPU market.
📢Meta is expanding deployment of NVIDIA Grace and Vera CPUs, built on Arm, across its data centers. As AI infrastructure scales, nearly 50% of compute shipped to top hyperscalers in 2025 was Arm-based. Adoption continues to accelerate, delivering high performance, energy efficiency.
— Arm (@Arm) February 19, 2026
What the Chip Actually Does
The Arm AGI CPU has 136 Neoverse V3 cores spread across two dies. It runs at 3.2 GHz all-core, with a boost speed of 3.7 GHz. The whole chip sits inside a 300-watt power envelope.
It uses TSMC’s 3nm manufacturing process — the same node used in high-end mobile chips. It supports PCIe Gen 6, CXL 3.0, and DDR5-8800 memory.
Mohamed Awad, Arm’s cloud AI chief, says the chip delivers “two times the performance-per-watt than you can [get] from an x86 rack.” By Arm’s math, that translates into up to $10 billion in data center savings per gigawatt of capacity.
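As a rough back-of-the-envelope sketch of what those figures mean at scale, the arithmetic below uses only the numbers in this article (the 300-watt envelope and the claimed 2× performance-per-watt advantage); the derived quantities are illustrative, not Arm's own model:

```python
# Back-of-the-envelope sketch of Arm's efficiency claim at data center scale.
# The 300 W power envelope and the 2x performance-per-watt figure come from
# the article; everything derived below is illustrative arithmetic only.

GIGAWATT = 1_000_000_000      # watts of deployed compute capacity

chip_power_watts = 300        # Arm AGI CPU power envelope
perf_per_watt_ratio = 2.0     # claimed advantage over x86 data center CPUs

# How many 300 W chips fit inside a gigawatt of capacity
chips_per_gw = GIGAWATT // chip_power_watts

# For the same total work, a 2x perf-per-watt advantage implies an
# equivalent x86 deployment would draw roughly twice the power.
x86_power_for_same_work = GIGAWATT * perf_per_watt_ratio
power_saved_watts = x86_power_for_same_work - GIGAWATT

print(f"Chips per gigawatt: {chips_per_gw:,}")        # 3,333,333
print(f"Power saved vs x86: {power_saved_watts / 1e9:.1f} GW per GW deployed")
```

Whether that halved power draw actually adds up to $10 billion per gigawatt depends on electricity prices, cooling, and hardware costs that Arm has not broken out publicly.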
Meta Helped Build It — and Will Be First to Use It
Meta is not just a customer here. The company co-designed the Arm AGI CPU alongside Arm’s engineers over an 18-month development period.
Meta plans to run the chip next to its own custom MTIA accelerators inside its giant data centers. The goal is to handle AI inference at gigawatt scale — the kind of workload that powers billions of daily interactions across Facebook, Instagram, and WhatsApp.
Seven other companies signed on at launch: OpenAI, Cloudflare, SAP, SK Telecom, F5, Cerebras, and Rebellions. Servers built around the chip are available now from ASRockRack, Lenovo, and Supermicro.
It’s official — the @Arm + NVIDIA Developer Community is now LIVE! We’re thrilled to bring developers: 🛠️ CPU + GPU learning pathways 🎙️ Expert-led livestreams 🏆 Hackathons and project spotlights Join us today.
— NVIDIA AI Developer (@NVIDIAAIDev) March 2026
The Numbers Behind the Business Case
Arm CEO Rene Haas laid out bold targets on launch day. He expects the AGI CPU line to bring in $15 billion per year in revenue by 2031, with total company revenue of $25 billion and earnings per share of $9.
Investors liked what they heard: Arm stock climbed 6% on the announcement. The company spent $71 million building new chip labs in Austin, Texas, to make this product happen.
Why This Matters for the AI Chip Race
NVIDIA has dominated AI compute through its GPU lineup. But GPUs are expensive and power-hungry. For inference (running AI models on real user requests), CPUs can be more efficient at scale.
Arm’s entry changes the competitive landscape. It now competes directly with Intel Xeon and AMD EPYC for AI data center sockets. And unlike those rivals, it has Meta and OpenAI already committed.
The real proof will come from Meta’s deployment data in the weeks ahead. If the performance claims hold up in production, other hyperscalers will have a strong reason to take a serious look.
