The end of Moore’s Law: a brief history

In this first of a series of articles, Graeme Lipschitz discusses what Moore’s Law is, some of the theory and history behind it, and why it is coming to an end. In the second article in this series, he will discuss how technologists are innovating to take computer processing and storage forward.

You’ve heard of Moore’s Law and Kryder’s Law, right? The ones where the number of transistors on a semiconductor chip (and, with it, performance) doubles every 24 months or so (Moore), and storage density doubles while the cost of that storage halves roughly every 14 months (Kryder).
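
To put those doubling periods in perspective, here’s a quick back-of-the-envelope sketch in Python. It simply compounds the doubling periods quoted above over a decade; it’s an illustration, not a precise measurement of either trend:

```python
# Compound growth using the doubling periods cited above:
# Moore (~24 months) and Kryder (~14 months). Illustrative only.

MOORE_DOUBLING_MONTHS = 24
KRYDER_DOUBLING_MONTHS = 14

def growth_factor(months_elapsed: float, doubling_period: float) -> float:
    """How many times over the starting value after a given number of months."""
    return 2 ** (months_elapsed / doubling_period)

if __name__ == "__main__":
    decade = 10 * 12  # 120 months
    print(f"Moore over a decade:  ~{growth_factor(decade, MOORE_DOUBLING_MONTHS):,.0f}x")
    print(f"Kryder over a decade: ~{growth_factor(decade, KRYDER_DOUBLING_MONTHS):,.0f}x")
```

Run it and you get roughly a 32x gain for processing against a few-hundred-fold gain for storage over ten years, which is why the two laws are usually quoted side by side.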

The improvement in chip speed, storage space and even pixel size in digital cameras has left an indelible mark on the global economy. Well, apparently, that’s all going to end: we’re hitting a physical limit on how few electrons each ever-shrinking memory cell can hold.

Michio Kaku, theoretical physics’ resident Yoda-type, explains things nicely in this video:

Eli Harari, CEO of awesome flash memory company SanDisk, explains: “When we started out we had about one million electrons per cell… We are now down to a few hundred.” This simply can’t go on forever, he noted: “We can’t get below one.”

Flash memory cells used to store a single bit each: a cell either held charge or it didn’t, a 1 or a 0, and data was read from that pattern. Newer multi-level cells instead measure how much charge (how many electrons) a cell holds, and can distinguish 16 levels: recording a number between zero and 15, or four bits per cell. Just in case you’ve forgotten, when we talk about electrons we’re almost at the atomic level in terms of size: a classical radius of 2.8179 × 10⁻¹⁵ metres.
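
To make that encoding concrete, here’s a minimal Python sketch, purely for illustration. The function names are made up and this is not SanDisk’s actual controller logic; it just shows how 16 distinguishable charge levels map to four bits and back:

```python
# Illustrative sketch (not real flash controller code): a 16-level cell
# stores a value 0-15, which is exactly four bits of data.

def decode_cell(level: int) -> str:
    """Turn a sensed charge level (0-15) into its four-bit pattern."""
    if not 0 <= level <= 15:
        raise ValueError("a 16-level cell can only hold values 0-15")
    return format(level, "04b")  # e.g. 13 -> '1101'

def encode_bits(bits: str) -> int:
    """Turn a four-bit pattern back into the charge level to program."""
    return int(bits, 2)

if __name__ == "__main__":
    for level in (0, 5, 13, 15):
        bits = decode_cell(level)
        print(f"charge level {level:2d} -> bits {bits} -> level {encode_bits(bits)}")
```

The catch, as Harari points out above, is that telling 16 charge levels apart gets ever harder as the number of electrons per cell shrinks towards single digits.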

Bernie Meyerson, IBM Fellow and VP of Innovation, describes it succinctly when he says: “1) atoms don’t scale; 2) silicon devices go ‘quantum mechanical’ at dimensions of about 7 nanometres; and 3) light is getting too slow and electrical signalling even slower.”

In 2005, Herb Sutter pointed out that around 2003 the Moore’s Law-style gains in single-core clock speed had stalled, and that companies like Intel and AMD had to resort to multi-core processors in order to keep up with the performance demands of consumers. So, instead of revving up the clock speeds and straight-line throughput of their CPUs even higher and risking overheating, chip producers simply multiplied the number of cores on the CPU chip.
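
Here’s a rough Python illustration of what that shift means for software: a single-threaded loop can’t run any faster than one core allows, but the same work, explicitly divided up, can spread across several cores. The job sizes and worker count below are arbitrary, chosen only to make the point:

```python
# Illustration of the multi-core trade-off: the same eight CPU-bound jobs,
# run one after another versus spread across worker processes.

import math
from multiprocessing import Pool

def cpu_bound_task(n: int) -> float:
    """A deliberately heavy calculation standing in for real work."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    jobs = [2_000_000] * 8  # arbitrary job sizes for illustration

    # Single core: the eight jobs run sequentially on one core.
    serial = [cpu_bound_task(n) for n in jobs]

    # Multiple cores: the same jobs run in parallel worker processes,
    # but only because the work was explicitly divided up.
    with Pool(processes=4) as pool:
        parallel = pool.map(cpu_bound_task, jobs)

    assert serial == parallel  # same answers, different wall-clock time
```

The parallel version only wins if the software is written to split its work, which is exactly the burden the multi-core era handed to programmers.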

[Chart: chip speeds]

To add to the woes of the chip manufacturers, software has always found a way to gobble up each increase in performance by piling on more work for the chips to complete; this is known as May’s Law: “Software efficiency halves every 18 months, compensating Moore’s Law.” So, for every 10x improvement in processing power, software has piled on 10x the amount of work for these processors to do.

So what now? We’ll discuss that in the next article in this series.
