The end of Moore’s Law: where to next?

Microchip Clock

In the second article of the series, Memeburn columnist Graeme Lipschitz discusses how technologists are innovating to take computer processing and storage forward.

So we’ve run out of surface area and going all multi-core has only helped so much.

What now?

One way to go is up.

Skyscraper chips

Think of it this way: when humans had taken up too much of the “ground floor”, they started building skyscrapers. A similar move is envisioned for chips and storage: going “3D”, so to speak. The only problem at the moment is that these new chips can only be written to once, which does away with the oft-used ability to rewrite your flash storage.

IBM is teaming up with 3M (the adhesives dudes) to develop an electronic “glue” that will let data travel faster vertically through the chip.

Bernie Meyerson explains: “Electrical signals across chips and circuit boards travel at a fraction of the speed of light. Yet at today’s blistering clock frequencies, you can lose hundreds if not thousands of clock cycles in a system waiting for a request for data to go from a logic chip to a remote memory bank and back.” By layering chips on top of one another, the distance a signal needs to travel can be cut by a factor of at least 100, which has huge implications for performance.
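
To get a feel for why distance matters, here’s a rough back-of-envelope sketch in Python. The path lengths, clock speed and signal-speed fraction are illustrative assumptions, and it only counts wire propagation delay (the hundreds of cycles Meyerson mentions come from the whole memory transaction), but it shows how shrinking the path shrinks the wait:

```python
# Back-of-envelope sketch (illustrative numbers, not IBM's figures): how many
# clock cycles a round-trip signal costs over a given path length.

C = 3.0e8              # speed of light in a vacuum, metres per second
SIGNAL_FRACTION = 0.5  # assume on-board signals travel at about half of c

def cycles_lost(path_length_m: float, clock_hz: float) -> float:
    """Clock cycles spent waiting for a signal round trip over the path."""
    round_trip_s = 2 * path_length_m / (C * SIGNAL_FRACTION)
    return round_trip_s * clock_hz

CLOCK_HZ = 3.0e9  # a 3 GHz clock

# Logic chip to a remote memory bank across a circuit board (~10 cm each way)...
print(f"planar, 10 cm path: {cycles_lost(0.10, CLOCK_HZ):.1f} cycles")
# ...versus a vertical hop through a stacked die (~1 mm), roughly 100x shorter.
print(f"stacked, 1 mm path: {cycles_lost(0.001, CLOCK_HZ):.2f} cycles")
```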

Wireless chips

Normal microprocessors have wires which (in relative terms) inhibit the transfer of electricity. Enter some bright young things from MIT’s Microphotonics Center, who envisage processors where the wires are replaced with flashing germanium (Ge) lasers that use infrared light to transmit data. Jurgen Michel, resident MIT boffin, has more: “As processors get more cores and components the interconnecting wires become clogged with data and are the weak link. We’re using photons, rather than electrons, to do it better.”

This mixing of traditional silicon materials and optical components is known as “silicon photonics”, and it typically sees a chip crisscrossed with a series of subterranean tunnels and caverns that transmit the pulses of light, where tiny mirrors and sensors relay and interpret the data. Cutting out the wires also requires less energy and generates less heat on the chip, something that has become an issue at higher clock speeds.
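
As a toy picture of what such an optical link does (illustrative only, not MIT’s actual design): a laser turns bits into light pulses, the on-chip waveguide carries them, and a photodetector at the far end turns them back into bits. The function names below are invented for the sketch:

```python
# Toy on-off-keyed optical link: 1 -> laser pulse, 0 -> no pulse.

def modulate(bits):
    """The laser side: turn each bit into a light pulse or darkness."""
    return ["pulse" if b else "dark" for b in bits]

def detect(pulses):
    """The photodetector side: map received pulses back to bits."""
    return [1 if p == "pulse" else 0 for p in pulses]

payload = [1, 0, 1, 1, 0, 0, 1]
received = detect(modulate(payload))  # the waveguide is assumed lossless here
assert received == payload
print(received)
```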

ReRAM or memristors

On the storage side of things, innovation is being led by HP, which has come out with ReRAM, or as it is commonly known: the memristor. The memristor, a resistor with memory, was first theorised in 1971 by University of California, Berkeley professor Leon Chua; HP Labs publicly demonstrated a working device in 2008.

HP used alternating layers of titanium dioxide and platinum and, under an electron microscope, they look like a series of long parallel ridges. Below the surface is a similar setup at a right angle, producing a grid-like array.

Memristor

“Think of it as a series of cubes that are 2 to 3 nanometers (nm) on a side,” says R. Stanley Williams, HP Labs’ memristor guru. Memory cells are created by connecting two adjacent wires with an electrical switch beneath the surface of the array. By adjusting the voltage applied to the cubes, scientists can open and close these tiny switches, storing data much like traditional flash memory chips do.
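
As a rough mental model of that grid, here’s a toy crossbar in Python. The class and method names are invented for illustration, and real ReRAM cells hold resistance states rather than Python booleans, but the addressing idea is the same: pick one top wire and one bottom wire, and the switch at their crossing is your cell.

```python
# Toy model of a memristor crossbar (a sketch, not HP's actual design).

class CrossbarArray:
    def __init__(self, rows: int, cols: int):
        # One bit per crossing: True for low resistance, False for high.
        self.cells = [[False] * cols for _ in range(rows)]

    def write(self, row: int, col: int, value: bool) -> None:
        # In hardware this would be a voltage pulse across the two wires
        # that flips the switch at their crossing; here we just set the bit.
        self.cells[row][col] = value

    def read(self, row: int, col: int) -> bool:
        # In hardware: apply a small sense voltage and measure the resistance.
        return self.cells[row][col]

array = CrossbarArray(rows=4, cols=4)
array.write(2, 3, True)   # "close" the switch at row 2, column 3
print(array.read(2, 3))   # True; in ReRAM the state persists without power
```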

The cool thing isn’t just that these chips will be able to store double the data (for their size) of current flash chips, but that they will be 1000x faster and far more durable: current flash memory can be rewritten about 100 000 times, whilst these could last for millions of rewrite cycles.

ReRAM could also replace the dynamic RAM in today’s computers: current RAM gets wiped when the computer is turned off, but ReRAM doesn’t lose its contents, so when the power goes down the chip retains its data. That means “booting up” and waiting for the computer to get to a state where it’s operational could be a thing of the past.

Additionally, Williams reckons it’s possible to stack ReRAM arrays on top of each other, skyscraper style, which could lead to further gains in capacity and performance, as mentioned previously.

“There’s no fundamental limit to the number of layers we can produce,” adds Williams. “We can get to petabit chips within about 10 years.”

Programmable chips

“Think of it like a department store with eight floors: you would take an elevator between floors to shop for different items. But rather than having eight different physical floors, each with its own internal arrangement and assortment of goods, Tabula has figured out a way to have a single layer (or floor) that reconfigures itself as needed,” says Steve Teig, Tabula’s president and CTO.

Tabula’s ABAX chip uses reprogrammable circuits that can change what they do on demand. Its current products deliver the equivalent of up to eight different chip layers, each of which can be reconfigured in 80 picoseconds. Because the circuits are fed their next set of assignments faster than a computational cycle completes, the layers can be swapped on the fly while the chip is waiting for its next commands.

“It’s as if, while you’re on the elevator, they’re inside rearranging the floor to create a different layout with different products. It looks to the outside world as if there are eight floors, but there’s only one,” says Teig.

Instead of going “skyscraper”, Tabula’s programmable chip just rearranges the ground floor to better deal with what is demanded of it.
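
As a sketch of what that time-multiplexing looks like, here’s a toy loop in Python. The configuration names and the idea of a “fold” are assumptions for illustration, not Tabula’s actual ABAX architecture; the point is simply that one physical fabric cycling through eight stored configurations every 80 picoseconds can stand in for eight stacked layers.

```python
# Toy time-multiplexing sketch (assumed behaviour, not the real ABAX design):
# one physical fabric steps through eight configurations so quickly that, to
# the rest of the system, it looks like eight chip layers working together.

RECONFIG_TIME_PS = 80  # per-layer swap time cited in the article

# Hypothetical configurations; each one "refurnishes the floor" for a new job.
configurations = [f"logic-fold-{i}" for i in range(8)]

def run_one_pass(folds) -> None:
    """Visit every fold once, as if eight layers had worked side by side."""
    elapsed_ps = 0
    for fold in folds:
        print(f"t = {elapsed_ps:3d} ps: fabric reconfigured as {fold}")
        elapsed_ps += RECONFIG_TIME_PS
    print(f"all {len(folds)} folds serviced in {elapsed_ps} ps")

run_one_pass(configurations)
```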

According to Teig, the technology can double the density of circuits, and memory and video throughput can be boosted by as much as 3.5 times. He’s quite bullish about the future of these chips too, adding: “There’s no limit to the number of levels we can integrate.” In other words, a single chip could eventually stand in for more than eight layers’ worth of conventional chips.

Graphene

Graphene is an atom-thick layer of carbon atoms arranged in a hexagonal pattern (essentially a single sheet of the graphite in your pencil) that kicks silicon’s ass in terms of how it performs electronically: it’s much faster, can be made with very small features, and the green folks will be encouraged by the fact that it uses less power.

Graphene has been played with since the 1970s but researchers had issues with making it super thin. University of Manchester researchers Andre Geim and Konstantin Novoselov successfully produced graphene layers in 2004 (this and other advances in graphene research earned them the 2010 Nobel Prize in physics), and the field has advanced rapidly since then.

Because graphene can be used to make electronic structures as small as 1 nm, it could yield a terahertz processor roughly 20 times faster than today’s best silicon chip. But while graphene chips are starting to emerge this year, don’t hold your breath expecting them in the next MacBook iteration. Because they’re so novel and expensive, graphene chips will most probably be used in speciality cases where cost matters less than high speed and low power usage. With that said, the chips that ended up in mainstream PCs started out as expensive, speciality parts too.

Besides all of these advances in chip technology, I’m still sure our children will laugh at us talking about on-device storage the same way we laughed at our elders who spoke of 1.44MB stiffy disks. The more internet speeds increase, the less we need to rely on hard drives to carry our data. Let’s leave the storage problems to cloud owners like Google and Dropbox, and worry more about how we can create software that doesn’t suck the life out of all this new processing power.

Image: Intel
