
Moore’s Law Will Soon End, But Progress Doesn’t Have To

March 2, 2016
by Greg Satell

In 1965, Intel cofounder Gordon Moore published a remarkably prescient paper which observed that the number of transistors on an integrated circuit was doubling every year (a pace he later revised to roughly every two years) and predicted that this trajectory would lead to computers becoming embedded in homes, cars and communication systems.

That simple idea, known today as Moore’s Law, has helped power the digital revolution. As computing performance has become exponentially cheaper and more robust, we have been able to do a lot more with it. Even a basic smartphone today is more powerful than the supercomputers of past generations.
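
To get a feel for what that compounding means, here is a rough back-of-envelope calculation in Python. It starts from the roughly 2,300-transistor Intel 4004 of 1971 and assumes a clean doubling every two years; the real cadence and the choice of chips varied, so treat the output as illustrative rather than historical.

    # Rough illustration of Moore's Law compounding, not exact chip history.
    # Assumes a clean doubling every two years, starting from the roughly
    # 2,300-transistor Intel 4004 of 1971; real progress was bumpier.

    START_YEAR = 1971
    START_TRANSISTORS = 2_300
    DOUBLING_PERIOD_YEARS = 2

    def projected_transistors(year):
        """Project a transistor count for a given year under ideal doubling."""
        doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
        return START_TRANSISTORS * 2 ** doublings

    for year in (1971, 1985, 2000, 2016):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")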

Yet the law has been fraying for years, and experts predict that it will soon reach its limits. When I spoke to Bernie Meyerson, IBM’s Chief Innovation Officer, however, he argued strongly that the end of Moore’s Law doesn’t mean the end of progress. Not by a long shot. What we’ll see instead is a shift in emphasis from the microchip to the system as a whole.

Going Beyond Silicon

The end of Moore’s Law is not a new issue. In fact, Meyerson argues that it first began unraveling in 2003, when insulating components within transistors began failing due to quantum mechanical effects. Since then, chip manufacturers have kept progress going by finding new materials whose atomic properties hold up better against those effects.

However, sometime around 2020, these workarounds will no longer suffice as the silicon itself yields to quantum mechanical reality. Some researchers, including at IBM, are pursuing strategies like carbon nanotubes and silicon photonics that have the potential to increase chip speeds without having to shrink features down to quantum scale.

Other approaches, such as quantum computing and neuromorphic chips, change the nature of computing itself and can be exponentially more efficient for certain tasks, such as pattern recognition in the case of neuromorphic chips and encryption in the case of quantum computers. Still, you wouldn’t want either of these running your word processor.

As Meyerson put it, “Quite frankly, for general purpose computing all that stuff isn’t very helpful and we’ll never develop it in time to make an impact beyond specialized applications over the next 5 or 10 years. For the practical future, we need to change our focus from chip performance to how systems perform as a whole by pursuing both hardware and software strategies.”

Integrating The Integrated Circuit

One way of increasing performance is by decreasing distance at the level of the system. Currently, chips are designed in two dimensions and built for specific functions, such as logic, memory and networking. Although none of them can do much by themselves, acting in concert they allow us to perform extremely complex tasks on basic devices.

So one approach to increasing performance, called 3D stacking, would simply integrate those integrated circuits into a single three-dimensional chip. This is harder than it sounds, because entirely new chip designs have to be devised, but it would vastly reduce the time circuits spend waiting on one another, increasing speed significantly, while decreasing power dramatically thanks to far shorter communication paths.
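
A crude way to see where those savings come from: to first order, the energy needed to move a bit across a wire scales with the wire’s capacitance, which in turn scales with its length. The Python sketch below compares an assumed board-level path with an assumed within-stack path; both lengths and the capacitance figure are illustrative round numbers, and real system-level gains are far smaller than the raw ratio because communication is only part of the power budget.

    # Toy first-order model: the energy to move a bit scales with wire
    # capacitance, and capacitance scales roughly with wire length.
    # All numbers below are illustrative assumptions, not measurements.

    CAP_PER_MM_FF = 200.0      # assumed wire capacitance, femtofarads per mm
    SUPPLY_VOLTAGE = 1.0       # assumed supply voltage, volts

    def bit_energy_fj(wire_length_mm):
        """Energy proportional to C * V^2, in femtojoules, to drive one bit."""
        capacitance_ff = CAP_PER_MM_FF * wire_length_mm
        return capacitance_ff * SUPPLY_VOLTAGE ** 2

    board_path_mm = 100.0      # chip-to-chip hop across a board (assumed)
    stack_path_mm = 0.1        # hop between layers of a 3D stack (assumed)

    ratio = bit_energy_fj(board_path_mm) / bit_energy_fj(stack_path_mm)
    print(f"Toy model: the stacked path uses ~{ratio:,.0f}x less energy per bit")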

In truth, this is not a new strategy but rather one that was deployed in the 1960’s to overcome a challenge called the tyranny of numbers. Simply put, the physical requirements of wiring thousands of transistors together were putting practical limitations on what could be designed and built. That’s what led to the invention of integrated circuits in the first place.

Meyerson says, “When we moved from transistors to integrated circuits, we shrunk an entire rack measuring about 40 cubic feet down to a single board measuring 19 x 26 inches. 3D stacking will shrink that board down to less than a square inch, and we can potentially get an increase in power performance of at least 10-100 fold.”

Building Intelligently Agile Systems

In the 1980’s, chip manufacturers began building specialized types of chips, called ASICs (application-specific integrated circuits), that were highly optimized for specific tasks, such as running complex financial models. These would significantly outperform conventional chips for those specific tasks, but ultimately, the process of hardwiring proved too expensive and unwieldy to be a viable strategy.

Yet Meyerson sees vastly more potential in a newer approach, the FPGA (field-programmable gate array), which can be repurposed on the fly through software. He points to Intel’s recent purchase of Altera as a strong indication that things are moving in that direction. It is well known that in specific applications FPGAs can produce gains of ten-fold or more in computing performance, but most importantly, that system-level gain is not restricted to a single application.

The FPGA approach is a major improvement because rather than going through a roughly 18-month process to design and manufacture a specialized chip, the same thing can be done in a matter of weeks. However, Meyerson thinks the potential may actually be far greater than that if we can build intelligent software that can reprogram the chips autonomically.

“So for example,” Meyerson says, “while you’re writing a document, your laptop would be configured to do exactly that, but if you then needed to run a simulation of some financial data for that same report, your system would re-optimize itself for the deep computations required. Such ‘intelligent’ architectures and the enabling software are the next grand challenge in IT.”
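
At its most schematic, that kind of autonomic reconfiguration might look like the Python sketch below: software detects the current workload and swaps in a matching accelerator. The load_bitstream call and the bitstream names are hypothetical placeholders, not a real vendor API; an actual system would go through a vendor toolchain and likely use partial reconfiguration.

    # Schematic sketch of workload-driven FPGA reconfiguration.
    # load_bitstream() and the bitstream names are hypothetical placeholders;
    # a real system would use a vendor toolchain rather than a simple file load.

    BITSTREAMS = {
        "text_editing": "general_purpose.bit",
        "financial_simulation": "monte_carlo_accel.bit",
        "photo_search": "image_recognition_accel.bit",
    }

    def load_bitstream(path):
        """Stand-in for a vendor-specific FPGA programming call."""
        print(f"[fpga] loading {path}")

    def reconfigure_for(workload, state):
        """Swap in the accelerator that matches the detected workload."""
        target = BITSTREAMS.get(workload, BITSTREAMS["text_editing"])
        if state.get("loaded") == target:
            return                 # already configured, nothing to do
        load_bitstream(target)
        state["loaded"] = target

    # Example: the system notices the user moving between tasks.
    state = {}
    for task in ("text_editing", "financial_simulation", "text_editing"):
        reconfigure_for(task, state)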

“Take this idea a little further,” he continues, “and you can see how new technologies like neuromorphic chips and quantum computing can deliver an enormous impact even as specialized systems in the cloud. Imagine being able to access the capabilities of a neuromorphic system for photo recognition and search while shopping, and then instantly switch to a quantum computer to facilitate the transaction with unbreakable encryption.”

The Future Of Technology Is All Too Human

Back in 1965, when Gordon Moore formulated his famous law, computers were enormous hunks of machinery that few people ever saw. After 20 years of continuous doubling, we got personal computers small enough to fit under our desks, but powerful enough to generate a graphical display and interact with us through a keyboard and a mouse. Twenty more years gave us the mobile revolution.

The future of technology is always more human, and Meyerson expects that, “by 2020, we’ll still be improving system performance exponentially, but we’ll have to change our conception of information technology once again, this time from machines that store, analyze and retrieve information to systems that are active partners in a very natural human/machine collaboration.”

“The cognitive era will be the ultimate bridge across the digital divide,” he notes, “spanning barriers not only of technology, but of language, education and skill level as well. IT will essentially become so advanced that it disappears along with previous institutional barriers. Even a teenager will have access to resources that only the most well-equipped research facilities have today, and they will be able to access them in real time.”

But perhaps the most important consequence of Meyerson’s vision of cognitive computing is not how it will change the way we work with computers, but how it will change the way we work with each other. Before the industrial era, people were valued for their ability to do physical work. In the knowledge economy, those with strong cognitive skills were considered “the best and the brightest.” Now, we will likely see a new shift in value.

In the future, when machines can do cognitive tasks more effectively than any human, we will likely find that competitive advantage will go to those who can collaborate effectively, with both people and machines. So the key to the future lies not so much in chips and algorithms as it does within ourselves.

– Greg

2 Responses
  1. April 4, 2016

    One minor detail: Moore’s Law didn’t create the technology revolution per se. It simply described a technology trajectory, and very accurately. How important was this predictive tool in stimulating investment, R&D, and production? And what new metrics will we need for IoT evolution?

  2. April 4, 2016

    Good questions. Thanks Clifton.

    – Greg
