ARM everywhere

In the last couple of months, we have seen a lot of news around ARM. The most relevant pieces, in my opinion, sorted by date, are:

  • On May 11, AWS announced the availability of new instance types (M6g, C6g, R6g) based on Graviton2, a new version of their in-house ARM processor
  • On June 17, Ampere announced a 128-core ARM processor that will be added to their current line (which includes 32-, 48-, 64-, 72-, and 80-core ARM CPUs)
  • On June 22, it was made public that Fugaku, an ARM-based supercomputer, is the most powerful publicly disclosed supercomputer
  • On June 22, Apple announced that Macs will move to ARM

Those pieces of news demonstrate how much the processor landscape is changing, and how fast it is changing.

The topic I would like to focus on is why, today, the x86 platform is not as appealing as it was in the ’90s, when it became the de-facto standard. I think there are two primary causes for this change of direction:

  • the current focus on efficiency more than raw performance
  • the usage of higher-level programming languages

Today’s focus on efficiency

Back in the ’90s, processors were far slower than current ones, and even though software was much slimmer, more often than not the user was waiting for the machine to complete a task. I remember that on the PC we had at home at the time (1997/1998), Windows 95 took a few minutes to boot to the desktop, and it was better to wait an additional minute for all operations to complete before starting an application. After that, many applications (like Office 97) took at least a minute to start. Saving a multi-page Word document usually took 10-20 seconds. Today those kinds of latencies are not expected and would probably not be acceptable. This improvement happened mainly because CPU performance increased far more than the bloat of the applications.

At the same time, we ask for devices that last longer on battery: current laptops often exceed 12 hours of autonomy, while just a few years ago we were positively surprised by notebooks that passed the 2-hour mark. This achievement has been made possible by the lower TDP of the CPUs (as well as of other components) and by larger batteries, although the latter contribution was much smaller than the former. Consider that the first generation of Intel i7 processors, released between November 2008 and June 2011, included two models in the “low power” line (Core i7-860S and Core i7-870S) with an 82W TDP. This was much lower than the other processors of the same generation, which had 95W (4 CPUs) and 130W (11 CPUs) TDPs. On the other hand, the current generation of i7 (the 10th), at the moment of writing this post, counts 12 CPUs with much lower TDPs: 7W (1 CPU), 9W (1 CPU), 15W (5 CPUs), 28W (2 CPUs), 45W (3 CPUs).

As you can see, the optimization of the CPUs has significantly reduced power consumption, extending battery life.

Similarly, in datacenters there is demand for more efficient systems, driven by the cloud model: the company that provides the machines (or services) pays the electricity bill and therefore wants more efficient computing.

The demand for more efficient CPUs drove the last ten years of Intel development, while ARM CPUs had to improve their performance at the cost of significantly increasing their TDPs, which started at less than 1W. Now the two architectures deliver comparable performance at comparable TDPs, even though Intel CPUs are still power-hungry compared to ARM ones.

Higher-level programming languages

In the last decades, we have seen an increasing level of abstraction in programming languages and, more generally, in IT. Very often this added abstraction is imperfect and ends up being a leaky abstraction: even though the layer you are working with should have hidden a problem, it has not done so completely, so you still have to understand and deal with cases that should have been abstracted away. The fact that different CPUs implement different instruction sets is a very old problem, and it can be solved in many ways, such as with a compiler, a virtual machine, or an interpreter; still, in the ’90s, the majority of code was CPU-dependent. Today the vast majority of newly-written code is completely CPU-agnostic. This shift is what is enabling the breakage of the x86 monopoly.

What’s next?

I think this trend will continue for many years to come. In the next few years, ARM will grow a lot, and it will not be as obvious as it is today that the laptop or server we are buying is x86.