ARM servers are making big promises, but do they deliver? Let’s dive into the real-world pros and cons of ARM in enterprise data centers, from cost and performance to the messy realities of deployment.  

So, you want an ARM server

ARM processors have become ubiquitous in mobile, desktop, and laptop computing. In the hyperscaler cloud space, major providers like AWS, Microsoft, and Google have developed custom ARM processors—Graviton, Cobalt, and Axion, respectively. These offer cost and efficiency benefits, prompting IT leaders to explore ARM-based solutions for on-premise deployments. 

Despite the availability and maturity of ARM in the hyperscaler space and consumer device market, folks looking to deploy ARM solutions on-premise in their data centers still face a challenge in 2025. On-prem ARM server offerings remain nascent, with only limited options from key players like HPE and Supermicro, and the absence of major manufacturers like Dell means IT leaders have even fewer choices for their on-prem deployments. Let's explore what is available and why options may stay limited for the next few years.  

Is ARM right for you? Jump to the decision tree ↓

First things first: What’s an ARM server? 

An ARM server is a server (not a CPU!) that uses ARM-based processors instead of traditional x86 processors (like those from Intel or AMD). ARM processors are designed with a focus on power efficiency, high core counts, and scalability, making them particularly attractive for cloud, edge computing, and specialized workloads. 

Why ARM servers are taking over data centers 

  • These processors have become a game-changer in the server world largely because of their impressive power efficiency. Think about running a massive data center - every watt saved per processor adds up to serious cost savings on electricity and cooling. ARM chips excel at delivering more computing power per watt compared to traditional options. 
  • Cost is another major factor driving adoption. Server-grade ARM processors typically hit a sweet spot in pricing that makes CFOs happy - they're notably less expensive than equivalent high-end x86 chips while still delivering the performance modern workloads need. 
  • The big cloud players have put their money where their mouth is. AWS, Microsoft, and Google have each poured resources into developing their own custom ARM chips (Graviton, Cobalt, and Axion, respectively). When tech giants make this kind of investment, it's a clear sign the technology has serious potential. 
  • Processing power is where things get really interesting. ARM servers often come packed with high core counts, making them perfect for the kind of parallel processing that modern applications demand. They can juggle countless tasks simultaneously. 

Who's jumping on the ARM bandwagon? 

  • Cloud providers were among the first to see the potential. They've been rolling out ARM-based instances for everything from web hosting to complex microservices architectures. 
  • The AI and machine learning crowd has found a sweet spot with ARM servers, particularly for inference workloads and edge computing scenarios where power efficiency really matters. 
  • Content delivery networks have embraced ARM architecture to handle the demands of video streaming and web content distribution. 
  • Traditional data centers have been slower to make the switch, but that's changing. More companies are exploring ARM servers as they look to modernize their infrastructure with an eye on both performance and operating costs. 

The ARM server market breakdown 

To understand the ARM server landscape, let's define the term 'proprietary' in the context of ARM servers. 'Proprietary' refers to technology that is exclusive to a company and not broadly available for purchase. In the case of ARM servers, this means that certain components or designs may be unique to a specific manufacturer or not available for purchase separately. However, proprietary offerings can exist on a spectrum—some, like NVIDIA GPUs, are widely sold despite proprietary design, while others, like AWS Graviton, are entirely restricted to internal use. Understanding this concept is crucial when exploring ARM-based solutions for on-premise data centers. 

So, let's take the ARM server market and slice it up into three layers.  

  • P1: Fully proprietary hyperscaler CPUs (AWS Graviton, Azure Cobalt, Google Axion) – Not available for purchase. 
  • P2: Partially proprietary enterprise-grade ARM servers (e.g., HPE RL300 Gen11) – High quality, expensive, but presently scarce. 
  • P3: Commodity ARM servers from vendors like Supermicro and Gigabyte – Lower cost, less reliability, and fewer premium features but are widely available. 

P1 

P1, the most proprietary, is the hyperscaler implementation of ARM CPUs. These are in-house developed custom silicon and are only available to the companies that designed them. You cannot buy an AWS Graviton CPU. You'll never see a server you can put in your data center with one of these chips. While ARM offerings from the public cloud providers are very mature, they aren't available outside their ecosystems.  

P2 

With P2, we start talking about servers you can buy. These usually come from HPE, Dell, and Cisco but contain some proprietary technology. The P2 class servers are akin to the NVIDIA model we mentioned earlier: some intellectual property is baked in (a custom main board, a BMC controller, etc.), but you can buy them if you can afford them. These servers are high quality, reliable, market leading, and more expensive. In this class, there is only one ARM-based offering, and it comes from HPE. For all practical purposes, nothing else is available from the flagship server providers.   

P3 

Finally, we have the P3 class of servers, which are more assemblies than the fully integrated offerings you see in the P2 class. Manufacturers such as Gigabyte and Supermicro might design one part, buy commodity components, design a chassis, and stick all the parts in it. While these machines work, you get lower quality and reliability, with fewer advanced features and nice-to-haves. P3 class servers are also less expensive and available to anyone. They are a great solution if you have a large, distributed workload that can handle failures! In the P3 class, we see about a dozen different ARM offerings.  

How is it that the public cloud providers have mature ARM server offerings, but the on-prem space is still lagging?  

Public cloud providers recognized the opportunity years ago and started designing and developing custom ARM solutions early on. Five years ago, there were no commodity ARM server CPUs available at scale, but the hyperscalers had the deep pockets, engineering expertise, and scale to pull off in-house custom silicon. In short, they had a head start and a nice tailwind. In contrast, the on-prem server market relies on the availability of commodity CPUs, and there simply were no ARM options to buy.  

So, now they have these efficient, proprietary ARM chips. But they also have something else going for them: hyperscalers use extremely stripped-down servers. They don't have redundant power supplies or fans; their servers are barren chassis, and their platforms are designed to expect these austere machines to fail. That means they can ignore many of the requirements on-prem buyers need and want. In short, their servers are less sophisticated and cheaper to build at scale because the requirements are fewer.  

The public cloud providers control their entire stack, from the custom silicon to the drivers and operating systems, so everything can be proprietary. Consumer solutions, however, need to work for everyone. A server for the broader market has to adhere to standards such as physical dimensions, and buyers want standard memory formats, off-the-shelf disk interfaces, and PCIe accessories.  

The on-prem server market expects complete server solutions that meet high standards for redundancy, remote management, and monitoring. Meeting those standards depends on networks of technology vendors and providers, and that creates friction. Public cloud providers, in contrast, develop and design to their own standards since everything is proprietary, so they can evolve quickly instead of waiting on outside technology vendors to start offering ARM-compatible solutions.  

So, what do we see in the on-prem space today?  

First, let's look at what CPUs are available. Despite a massive install base in mobile and desktop (Apple Silicon), there are very few ARM CPU options for servers. As of this writing, the two primary vendors are Ampere and Marvell. NVIDIA has some specialized options, but we're going to exclude them for now since we're focused on general-purpose servers.  

Ampere offers AmpereOne with a whopping 192 cores and the Ampere Altra with more modest options between 32 and 128 cores. Both processors operate at lower watts per core than their Intel and AMD x86 counterparts, at the expense of lower single-core performance. Both the AmpereOne and the Altra are available today through various server manufacturers.  

Returning to the P1 - P3 proprietary scale, we'll ignore the P1 class servers since those aren't available to the general public. Of the P2 vendors, HPE is the only one with a single, lonely ARM option. The HPE RL300 includes a single Ampere Altra or Ampere Altra Max with up to 128 cores and 4TB of memory, and it has all the remote management and redundancy features we've come to expect from HPE ProLiant servers. But that's it; HPE has nothing else. Dell is notably absent. Let's take a step down and survey what's available in the sub-tier P3 market of commodity servers.  

In the P3 category, we have quite a few more options, but every available option is powered by Ampere processors. The two vendors with the most to offer are Supermicro and Gigabyte. Many choices exist here, including Ampere Altra, Altra Max, and AmpereOne options. Unlike the RL300, some two-socket options allow up to 256 cores in a single server. Additionally, a handful of boutique shops, such as System76, are building generic servers, too.  

While it's great to have so many choices in the P3 category, our experience is that these vendors tend to produce a lower-cost product at the expense of quality and features. If you can identify a configuration that meets your needs and doesn't require advanced BMC or storage capabilities, these machines can be highly cost-effective and worth considering, especially if you have a workload that can handle failures transparently.  

Are ARM servers worth it?  

It depends on what you are measuring. Compare an Ampere chip to a similar AMD EPYC chip and the EPYC will win on raw speed. But while per-core performance trails x86, ARM's lower power consumption and higher core density allow for greater throughput efficiency in distributed workloads like Kubernetes clusters, microservices, and CDN edge nodes. Just don't expect jaw-dropping single-core performance.  
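To make that throughput-versus-single-core tradeoff concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it (per-core scores, TDP wattages) is a made-up placeholder rather than a benchmark result, so substitute your own measurements before drawing conclusions.

```python
# Illustrative perf-per-watt comparison; every number below is a hypothetical placeholder.
def throughput_per_watt(cores: int, per_core_score: float, tdp_watts: float) -> float:
    """Aggregate throughput (arbitrary units) divided by rated power draw."""
    return (cores * per_core_score) / tdp_watts

# Hypothetical high-core-count ARM part: many cores, lower per-core score, lower TDP.
arm = throughput_per_watt(cores=192, per_core_score=1.0, tdp_watts=350)
# Hypothetical x86 part: fewer cores, higher per-core score, higher TDP.
x86 = throughput_per_watt(cores=96, per_core_score=1.5, tdp_watts=400)

print(f"ARM: {arm:.2f} units/W")  # ~0.55 with these placeholder numbers
print(f"x86: {x86:.2f} units/W")  # ~0.36 with these placeholder numbers
```

The point is only the shape of the math: if your workload scales across cores, aggregate throughput per watt can favor ARM even while any single core is slower.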

Can you realize cost savings?  

Yes, probably. Even though the server offerings are limited, they are cheaper than comparable x86 machines, so the capital investment is lower; an Ampere with more cores than an EPYC costs less. The power story is more nuanced: ARM CPUs draw less power under load than x86 but more when sitting idle than their mature x86 counterparts, so the savings only materialize if you keep your servers busy.   
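Here is a rough sketch of how the idle-versus-load power math plays out, again in Python. The wattages, utilization, and electricity price are assumed placeholders for illustration, not measured figures.

```python
# Rough annual power-cost estimate; every input is an assumed placeholder, not a measurement.
HOURS_PER_YEAR = 8760

def annual_power_cost(idle_watts: float, load_watts: float,
                      utilization: float, usd_per_kwh: float) -> float:
    """Blend idle and loaded draw by utilization, then price a year of runtime."""
    avg_watts = idle_watts * (1 - utilization) + load_watts * utilization
    return (avg_watts / 1000) * HOURS_PER_YEAR * usd_per_kwh

# Hypothetical servers at 70% utilization and $0.12/kWh.
arm_cost = annual_power_cost(idle_watts=180, load_watts=350, utilization=0.70, usd_per_kwh=0.12)
x86_cost = annual_power_cost(idle_watts=120, load_watts=450, utilization=0.70, usd_per_kwh=0.12)

print(f"ARM server: ${arm_cost:,.0f} per year")  # cheaper here because it is kept busy
print(f"x86 server: ${x86_cost:,.0f} per year")
```

With these placeholder inputs the ARM box comes out ahead at 70% utilization; drop utilization low enough and its higher idle draw erases the advantage, which is exactly the "keep them busy" caveat above.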

What about software?  

Fortunately, because the hyperscalers have offered ARM-based instances for many years, there are mature ARM distributions of Linux, and most open-source software has been ported and is ARM-compatible. However, you might still run into issues finding ARM drivers for hardware. Vendors are accustomed to shipping x86 drivers for, say, a 25G network interface card, and might not have ARM versions. So, pick your hardware carefully and be prepared for limited options.  
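One practical way to gauge readiness is to check whether your hosts and container images have arm64 builds before committing to hardware. The sketch below is one way to do that; it assumes a reasonably recent Docker CLI is installed, and the image name is only an example.

```python
# Quick ARM-readiness checks; assumes a reasonably recent Docker CLI is installed and on PATH.
import json
import platform
import subprocess

def host_is_arm64() -> bool:
    """True when running on a 64-bit ARM host (reported as aarch64 or arm64)."""
    return platform.machine().lower() in ("aarch64", "arm64")

def image_has_arm64(image: str) -> bool:
    """Check a registry manifest list for a linux/arm64 variant of a container image."""
    out = subprocess.run(
        ["docker", "manifest", "inspect", image],
        capture_output=True, text=True, check=True,
    ).stdout
    manifests = json.loads(out).get("manifests", [])
    return any(m.get("platform", {}).get("architecture") == "arm64" for m in manifests)

if __name__ == "__main__":
    print("Host is arm64:", host_is_arm64())
    # 'nginx' is only an example; check the images your platform actually runs.
    print("nginx publishes arm64:", image_has_arm64("nginx:latest"))
```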

While there is solid Linux support, few comprehensive hypervisor options are available. Linux-native KVM virtualization is supported, but more complete enterprise offerings like VMware and Proxmox have yet to be ported to ARM. If you require virtualization, you must roll your own solution with KVM, LXD, or containerization with K8s.  
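If you do roll your own KVM stack, a quick preflight check like the sketch below can confirm the host exposes what you need. The specific binaries it looks for (qemu-system-aarch64, virt-install) are common defaults on mainstream distributions, but treat them as assumptions since your packaging may differ.

```python
# Minimal preflight before rolling your own KVM-based virtualization on an ARM host.
import os
import platform
import shutil

def kvm_ready() -> dict:
    """Report whether this host looks ready to run KVM guests on aarch64."""
    return {
        "arch_is_arm64": platform.machine().lower() in ("aarch64", "arm64"),
        "kvm_device": os.path.exists("/dev/kvm"),            # kernel exposes KVM
        "qemu_binary": shutil.which("qemu-system-aarch64"),   # userspace emulator present
        "virt_install": shutil.which("virt-install"),         # optional libvirt tooling
    }

if __name__ == "__main__":
    for check, result in kvm_ready().items():
        print(f"{check:>15}: {result}")
```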

It should be noted that Windows Server support for ARM is still immature; Microsoft expects to offer it with the upcoming release of Windows Server 2025. And compared to Linux, even fewer software vendors offer ARM-based Windows versions of their software and drivers. For now, Linux is the only mature operating system option.  

What’s next for ARM in on-prem data centers?

The ARM server ecosystem is rapidly evolving, but significant questions remain. Will major vendors like Dell and Lenovo enter the market with enterprise-grade ARM offerings? How will upcoming advancements from Ampere, such as the anticipated AmpereOne M with 256 and 512 core options, shape the competitive landscape? Additionally, as ARM adoption grows, we may see broader software ecosystem support, improved enterprise hardware compatibility, and more diverse workloads shifting to ARM architectures. 

Meanwhile, Intel and AMD are responding with new x86 architectures focused on efficiency, potentially narrowing ARM's power-per-core advantage. At the same time, NVIDIA is pushing ARM adoption with its Grace Superchip, targeting AI and HPC workloads. The next few years will determine whether ARM servers become a mainstream on-prem option or remain a niche solution for specialized workloads. 

For IT leaders, the key takeaway is to stay ahead of the curve. The time to explore ARM isn’t five years from now—it’s today. Whether through proof-of-concept testing, pilot deployments, or hybrid cloud strategies, organizations that start evaluating ARM now will be best positioned to capitalize on its future advancements. 

Should you try ARM today?

Can you recompile your application for ARM?

If no: Check out our x86 offerings. We've got some very efficient AMD EPYC options with high core counts.
If yes: Try ARM.

Do you need more than a few dozen servers?

If no: Managed private cloud or managed colo might be a better fit. You need scale to take advantage of ARM (economically and performance-wise).
If yes: Try ARM.

Does your workload scale well horizontally?

If no: Look at huge beefy servers with high clock speeds or consider refactoring your application for distributed scaling. Scaling deep isn't optimal for ARM.
If yes: Try ARM.

Is your platform resilient to server failures?

If no: Consider more mature servers with redundancy and investigate application patterns that address infrastructure failures. The P3 servers available are still a bit janky and lack some of the redundancy features available in the P2 class of servers.
If yes: Try ARM.

Do you have the budget and time to POC the equipment?

If no: Consider renting hardware from a colocation provider or use a cloud-based solution. It's a nascent environment, so you need to test and play with what's available.
If yes: Try ARM.

Are you nerdy and scrappy and like to save money with efficient and clever DIY engineering?

If no: Colocation or managed services might be a better option. You need to be clever and skillful to take advantage of it in 2025.
If yes: Try ARM.

If you answered yes to all of these, then yes, you should try ARM-based servers. ARM servers could offer a compelling advantage if your infrastructure requires high core density, scales horizontally, and prioritizes power efficiency over raw per-core performance.  

While the on-prem ARM server market is still evolving, vendors like HPE, Supermicro, and Gigabyte are making strides in providing viable alternatives to x86. However, careful planning is essential. You must consider software and driver compatibility and operational resilience to ensure a successful deployment.  

At Summit, we specialize in helping IT leaders navigate this shifting landscape. Whether you're evaluating ARM for cost savings, sustainability, or workload optimization, our experts can guide you through hardware selection, performance benchmarking, and integration strategies tailored to your needs. Contact us today to explore how ARM can fit into your data center roadmap and gain a competitive edge in your IT infrastructure.

Eric Dynowski

Eric Dynowski has been developing software, designing global infrastructures, and managing large technology installs for over 25 years. His background in complex infrastructure design and integration has helped reduce customer budgets by millions.
