IBM Has Created A Revolutionary New Model For Computing—The Human Brain
Technology isn’t what it used to be. Forty years ago, computers were strange machines locked away in the bowels of large organizations, tended to by an exclusive priesthood who could speak their strange languages. They were mostly used for mundane work, like scientific computation and back-office functions in major corporations.
Yet by the 1980s, technology had advanced enough to produce relatively cheap computers for personal use. No longer relegated to back rooms, they began to appear on desktops in homes and offices, used for writing letters, doing taxes and even playing games.
We’re now entering a third paradigm, in which computers have shrunk even further and assist us with everyday tasks, like driving to a friend’s house or planning a vacation. These jobs are very different because they require computers to recognize patterns. To power this new era, IBM has developed a revolutionary new chip modeled on the human brain.
The Brilliant Design Of The Human Brain
Signals in our brains travel relatively slowly, at about 200 miles per hour, which is no match for the speed-of-light calculations of computer chips. However, even young children are able to do many things with ease that machines have great difficulty with, such as recognizing a face or catching a ball.
The reason is that our brains are, in technological parlance, massively parallel. Each one of our billions of neurons can, potentially at least, communicate directly with every other one. As we gather more experiences, synapses multiply and strengthen, wiring our brains to weave disparate pieces of information into familiar groupings that we can act on efficiently.
We call this process learning and it really is an incredible thing. To understand it better, Dharmendra Modha, a research scientist at IBM, used a supercomputer to run a simulation of a human brain. What he found astonished him.
Even using 1.5 million processors and 8 megawatts of electricity—enough to power about 4,000 homes—the supercomputer ran about 1,500 times slower than a human brain! Doing some quick calculations, Modha estimated that our brains are, in effect, about a billion times more efficient than today’s computer architectures.
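To see roughly where that billion-fold figure comes from, here is a back-of-envelope sketch in Python. The 8 megawatts and the 1,500-fold slowdown come from the paragraph above; the roughly 20-watt power draw of the human brain is a commonly cited estimate assumed for this sketch, not a number Modha gives here.

```python
# Back-of-envelope estimate of how much more energy efficient the brain is
# than the simulating supercomputer. The 8 MW and the 1,500x slowdown come
# from the article; the ~20 W brain figure is an assumed, commonly cited value.
supercomputer_power_watts = 8_000_000   # 8 megawatts
brain_power_watts = 20                  # assumed typical power draw of a human brain
slowdown_factor = 1_500                 # the simulation ran about 1,500x slower

# Energy per unit of "brain work": relative power multiplied by relative time.
efficiency_ratio = (supercomputer_power_watts / brain_power_watts) * slowdown_factor
print(f"Brain is roughly {efficiency_ratio:,.0f}x more energy efficient")
```

The result lands at roughly 600 million, the same order of magnitude as Modha’s billion-fold estimate.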
He realized that if he could close even a small part of that gap, he could produce something truly revolutionary. So he got to work on designing a chip that would be completely different from anything anyone had seen before—a machine that combines the lightspeed calculation of silicon-based computers with the design of the human brain.
A Revolution In Chip Design
To get started, Modha accessed a database of the wiring of the brain of the macaque, a close cousin of humans, and analyzed how its billions of neurons were networked together. By studying how these connections were arranged, he began to see how he could create a massively parallel circuit that mimicked the efficiency of the brain.
Using what he learned, he wired together his own model into a single core containing 256 neurons and more than 64,000 synapses. He then shrank that core down by a factor of 10 in size and by a factor of 100 in power consumption and linked the cores together in a 64×64 array to create his “neuromorphic” chip, called TrueNorth.
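As a rough sketch of what that 64×64 array adds up to, the tally below uses the per-core numbers from the paragraph above and assumes each 256-neuron core carries 256 × 256 = 65,536 synapses, consistent with “more than 64,000”:

```python
# Rough tally of the TrueNorth chip described above. The per-core figures come
# from the article; the exact synapse count per core is assumed to be
# 256 x 256 = 65,536 (a fully connected core).
neurons_per_core = 256
synapses_per_core = 256 * 256          # assumed fully connected core
cores_per_chip = 64 * 64               # the 64x64 array of cores

total_neurons = cores_per_chip * neurons_per_core      # 1,048,576
total_synapses = cores_per_chip * synapses_per_core    # 268,435,456
print(f"{cores_per_chip} cores, {total_neurons:,} neurons, {total_synapses:,} synapses")
```

That works out to roughly a million neurons and roughly 268 million synapses on a single chip.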
It wasn’t just the wiring that was different, though. Conventional chips are made up of tiny transistors that act as switches to generate ones and zeros. These transistors, in turn, are arranged into Boolean logic gates that represent operations such as AND, OR and NOT. That, essentially, is the grammar through which traditional computers understand the world.
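As a toy illustration of that grammar (my own sketch, not anything from IBM), here is a one-bit adder built in Python from nothing but AND, OR and NOT:

```python
# Toy illustration: a one-bit half adder composed purely of the AND, OR and NOT
# "grammar" that conventional chips are built from.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    # XOR built from AND, OR and NOT: (a OR b) AND NOT (a AND b)
    total = AND(OR(a, b), NOT(AND(a, b)))
    carry = AND(a, b)
    return total, carry

print(half_adder(1, 1))  # -> (0, 1): one plus one is binary 10
```

Every operation a conventional processor performs ultimately reduces to compositions of gates like these.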
In order to prevent the machines from devolving into a cacophony of garbled signals, computers run on clocks, so that the different parts of a chip stay in sync. The problem is that the computer’s “brain”—called a CPU—often needs to wait for memory chips to send it data. This waiting, known as the von Neumann bottleneck, is a massive waste of time and power.
Modha’s neuromorphic chip works fundamentally differently. Each neuron fires when the signals it receives reach a certain threshold, at which point it “spikes” and sends a signal to another neuron, closely integrating the disparate parts into a seamless whole. This obviates the need for clocks, dramatically lowers power consumption and allows the chips to run in a massively parallel fashion—just like the human brain.
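To make the contrast concrete, here is a minimal sketch of a threshold-and-spike neuron of the general kind described above. It is an illustrative simplification, not IBM’s actual TrueNorth neuron model: each neuron simply accumulates incoming signals and passes a spike along when its threshold is crossed, with no global clock.

```python
# Minimal sketch of an event-driven, threshold-and-spike neuron. This is an
# illustrative simplification, not IBM's TrueNorth neuron model.
class SpikingNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0
        self.downstream = []          # (neuron, weight) pairs this one sends spikes to

    def receive(self, weight):
        """Accumulate an incoming signal; spike only if the threshold is reached."""
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset after spiking
            for target, w in self.downstream:
                target.receive(w)     # pass the spike along, event by event

a, b = SpikingNeuron(threshold=1.0), SpikingNeuron(threshold=0.5)
a.downstream.append((b, 0.6))
a.receive(0.4)   # below threshold: nothing happens
a.receive(0.7)   # crosses threshold: a spikes and drives b over its own threshold
```

Because work happens only when a spike arrives, idle parts of the network consume essentially nothing, which is where much of the power saving comes from.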
A New Era Of Computing
Neuromorphic chips have three attributes that will make them an important part of the new era of computing. First, their architecture makes them ideal for pattern recognition tasks, like speech and image recognition, medical imaging and finding patterns in data so that systems can, for example, identify fraudulent or criminal activity in a network.
Second, they are scalable down to a single core or up to a large array. So a neuromorphic core—which is only slightly bigger than the width of a human hair—can be embedded in sensors to process information at the source. “Instead of bringing data to computation, we can now bring computation to data,” Modha says.
On the other end of the scale, IBM recently installed an array of sixteen chips at Lawrence Livermore National Laboratory, which contains 16 million neurons and 4 billion synapses, to work on advanced simulations and machine learning applications. This array, which is not much bigger than a shoebox, has the potential to replace large supercomputers performing similar tasks.
Finally, neuromorphic chips are thousands of times more efficient than conventional chips, requiring only a minuscule fraction of the power. So robots can be made to run with far smaller battery packs, and our smartphones will be able to handle machine learning tasks—like voice recognition and navigation—without draining the battery.
The Road To Transformation
As I’ve explained before, innovation is never a single event, but involves the discovery of an insight, the engineering of a solution and the transformation of an industry or field. Clearly, Modha has accomplished the first and second stages, but the last leg can often take decades. A technology hasn’t really arrived until it’s useful to plumbers and store clerks.
So Modha and IBM have been aggressively putting the revolutionary new chips into the hands of researchers who can develop pathbreaking new applications for TrueNorth. There are currently 100 chips in circulation at academic and governmental institutions, with more to come. The firm is also offering workshops, so that engineers can get up to speed on how the chip functions.
In the coming years, IBM will be working with clients, such as those in the firm’s analytics practice, to develop specific applications based on the TrueNorth chips. It is also developing a network of partners to help create next generation products in areas like robotics, medical imaging and autonomous cars.
For the rest of us, the neuromorphic chip revolution will be mostly invisible—few of us will ever see one, or would recognize it if we did—but we will notice a change in how we work with technology. Rather than being hyper-rational calculating machines, computers will think more like we do and help us to collaborate more effectively—with each other and with machines.
The future of technology is all too human.
– Greg
Skynet. Here we come.
Thomas, neurons are not essential to intelligence, in the same way that feathers and flapping wings are not essential to aviation. So, super-intelligence (Skynet) is nonsense.
Scientists fail to define intelligence in a natural way. So how will they ever be able to implement natural intelligence in artificial systems?
I have defined intelligence in a natural way. Even though my natural language reasoner is published as open source software, it is still unbeaten. Moreover, it shows that some scientific theories have fundamental problems.