The Improbable Origins of Modern Computing
They say that those who don’t learn their history are condemned to repeat it, but isn’t the opposite also true? If the past is prologue, can we really move forward without looking backward?
Futuristic buzzwords like the social web, strong AI and the web of things are talked about as hot new trends, but our digital future will most likely be far stranger and more wonderful than anything we can imagine today.
The truth is that new paradigms arise from novel solutions to old problems. Those solutions in turn have consequences that are both unforeseen and unintended. Much of today’s technology began as investigations into obscure curiosities in first-order logic, radio communication and the like. The seeds of the next wave will be just as improbable.
A Hole at the Center of Logic
Logic was first developed by Aristotle in ancient times and survived for over 2,000 years without any significant augmentation or alteration. It was one of those things, much like the ebb and flow of the tides or the rising and setting of the sun, that you simply accepted. You didn’t question it any more than you would question day turning into night.
However, by the late 19th century, the seams began to show. People like Cantor and Frege noticed some inconsistencies and tried to patch them up, but then Bertrand Russell showed that the hole was deeper and more fundamental than anyone had thought. The problem can be summarized with the following phrase:
The barber of Siberia shaves every man who doesn’t shave himself, and no one else. So who shaves the barber?

This is known as Russell’s paradox, and it is devilishly difficult to deal with because it points to an instance in which a proposition can be considered to be both true and not true. The more people looked, the more examples like it they found littered throughout logic and mathematics, leading to a foundational crisis.
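In the set-theoretic terms mathematicians usually use (a standard modern formulation, not Russell’s own 1901 notation), the paradox concerns the set of all sets that are not members of themselves:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad \Longrightarrow \qquad
R \in R \iff R \notin R
```

Whichever answer you give to the question of whether R contains itself forces the opposite answer, which is exactly the barber’s predicament.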
The whole thing was, of course, ridiculous. It was almost as if a riddle in a crossword puzzle led physicists to question gravity. Everybody knows that 2+2 equals 4 and always will. Surely, that same principle must apply throughout mathematics and logic? It was just a matter of constructing the rules of the system correctly. Wasn’t it?
Hilbert’s Program
As seemingly trivial as the situation was, nobody could clear it up, no matter how hard they tried. There were meetings and debates, lots of hand-wringing and guffawing, but ultimately no real answer. Finally, in 1928, David Hilbert, the most prominent mathematician of the time, set forth a program that would resolve the crisis.
It largely centered around three basic principles to be proved:
Completeness: In a logical system, every statement can be either proved or disproved by the rules of that system.
Consistency: No statement can be both proved and disproved; the system contains no contradictions (i.e. if it can be shown that 2+2=4, it can never also be shown that 2+2=5).
Computability: For any assertion, there is an algorithm that can decide whether it is true or false (also called decidability).
He didn’t have to wait long for an answer to the first two questions. Unfortunately, it wasn’t the answer he was hoping for. In 1931, the 25-year-old Kurt Gödel published his incompleteness theorems, which showed that any formal system powerful enough to express arithmetic is either incomplete or inconsistent. None could satisfy both conditions.
The hole at the center of logic was just something that everyone would have to learn to live with. Logical systems, no matter how they’re set up, are fundamentally flawed.
The Universal Computer
Gödel’s paper was not conjecture, but a proof. In a very real sense, he used logic to kill logic. In order to do so, he came up with an innovative new tool called Gödel numbering. The idea was that statements could be encoded as numbers, which could then be manipulated and combined like any other numbers, allowing a system to make assertions about its own statements.
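As a rough sketch of the trick, the toy Python below packs a formula into a single number by raising successive primes to the code of each symbol and unpacks it again by factoring. The symbol table is invented for the illustration; it is not Gödel’s actual 1931 scheme.

```python
# Toy Gödel numbering: encode a sequence of symbols as one integer by
# raising successive primes to each symbol's code; decode by factoring.
# The symbol table is made up for this example.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6, "x": 7}

def primes(n):
    """Return the first n primes by simple trial division."""
    found, candidate = [], 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula):
    """Encode a string of symbols as the product p1^c1 * p2^c2 * ..."""
    codes = [SYMBOLS[ch] for ch in formula]
    number = 1
    for p, c in zip(primes(len(codes)), codes):
        number *= p ** c
    return number

def decode(number):
    """Recover the symbol sequence by reading off the prime exponents."""
    inverse = {code: symbol for symbol, code in SYMBOLS.items()}
    symbols, candidate = [], 2
    while number > 1:
        exponent = 0
        while number % candidate == 0:
            number //= candidate
            exponent += 1
        if exponent:
            symbols.append(inverse[exponent])
        candidate += 1
    return "".join(symbols)

n = godel_number("S0=S0")   # "the successor of 0 equals the successor of 0"
print(n, decode(n))         # 808500 S0=S0
```

Because the encoding is reversible, every statement about formulas becomes a statement about ordinary numbers, which is what allows a system to talk about itself.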
His method was put to use a few years later, almost simultaneously but independently, by Alonzo Church and Alan Turing to answer the question of computability. Much like Gödel, they found the answer to Hilbert’s question to be negative: some problems simply cannot be decided by any algorithm.
Turing’s method also had an interesting byproduct, the Turing machine (a working model was recently featured as a Google Doodle), which could perform any computable sequence using elementary symbols and processes. This was the first time anybody had seriously thought of anything resembling a modern computer. Turing would write in 1948:
We do not need to have an infinity of different machines doing different jobs. A single one will suffice. The engineering problem of producing various machines for various jobs is replaced by the office work of ‘programming’ the universal machine to do these jobs.
That, in essence, is what a modern computer is – a universal machine. If we want to write a document, prepare a budget or play a game, we don’t switch machines, but software.
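To make the idea concrete, here is a minimal Turing machine sketch in Python. The rule table, which simply flips every bit of its input, is an invented toy example; Turing’s own formulation was stated in terms of tapes and behaviour tables rather than dictionaries.

```python
# A minimal Turing machine: a tape, a read/write head, and a table of rules.
# Each rule maps (state, symbol under the head) to
# (symbol to write, direction to move, next state).

def run(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A toy rule table: scan right, flipping 0s and 1s, and halt at the first blank.
INVERT = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(INVERT, "10110"))   # prints 01001
```

Changing what the machine does means changing only the rule table, never the simulator itself: the “office work of programming” Turing describes above.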
Alas, at the time it was just a figment of Turing’s imagination. To be practical, a machine would need to calculate at speeds then considered incredible, and there was also the problem of encoding instructions in a way that could be reliably processed and then displayed in a fashion that humans can understand.
The Zero Year of 1948
After a long and difficult gestation period, the digital world was finally born at Bell Labs in 1948. First came the transistor, invented by John Bardeen, William Shockley and Walter Brattain, which had the potential to compute at the speeds necessary to build a practically useful Turing machine.
Next was Claude Shannon’s creation of information theory. At the core of the idea was the separation of information and content. To get an idea of what he meant, take a look at the QR code at the top of the page. It surely contains information, but not necessarily content.
Nevertheless, that content (a must-see video of Tim Berners-Lee explaining his vision for the next Web) can be unlocked using the code.
However, the main achievement was that he showed how any kind of information can be encoded into binary digits, or bits. That information could be content, but it could also be other types of encoding, like redundancy to make messages more reliable or compression codes to make them smaller and more efficient (all modern communications use both).
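As a rough illustration in Python, the parity bit and run-length scheme below are deliberately crude stand-ins for the far more powerful error-detecting and compression codes that grew out of Shannon’s theory:

```python
# Any message can be reduced to bits; extra bits can then be spent on
# reliability (error detection) or saved through compression.
# Both schemes below are toy stand-ins for real channel and source codes.

def to_bits(text):
    """Encode a string as a stream of 0s and 1s, eight bits per byte."""
    return "".join(format(byte, "08b") for byte in text.encode("utf-8"))

def add_parity(bits):
    """Redundancy: append a parity bit so any single flipped bit is detectable."""
    return bits + str(bits.count("1") % 2)

def run_length_encode(bits):
    """Compression: collapse runs of identical bits into (bit, length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

bits = to_bits("hello")
print(len(bits))                     # 40 bits of raw content
print(add_parity(bits)[-1])          # the extra, redundant parity bit
print(run_length_encode(bits)[:4])   # the first few compressed runs
```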
It was information theory, along with Shannon’s earlier work showing that Boolean algebra could be implemented in physical switching circuits (the logic gates at the heart of every digital device), that made the information age possible. Yet there still remained one last obstacle to be overcome before the digital world could become operational.
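As a small sketch of what turning Boolean algebra into logic gates means in practice, the Python snippet below builds the familiar gates out of a single NAND operation and wires them into a half-adder; this is a standard textbook construction, not Shannon’s own example.

```python
# Boolean algebra realized as gates: NAND alone is enough to build the rest,
# and a handful of gates wired together already performs arithmetic.

def NAND(a, b):
    return 0 if (a and b) else 1

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a, b):
    """Add two bits: returns (sum, carry), the first step toward an adder."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Stack enough of these together and you get the arithmetic and control units inside every processor.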
The Tyranny of Numbers
For all of the genius that went into creating the theoretical basis of modern computing, there remained a very serious practical problem known as the tyranny of numbers.
Complicated electronic devices require thousands of logic gates, each containing several transistors along with other electrical components. Connecting and soldering each one by hand is an invitation to disaster. One defect can render the whole thing useless.
The solution was uncovered by Jack Kilby and Robert Noyce, who each independently proposed that all of the necessary elements of a circuit could be etched onto a single chip of semiconductor material: the integrated circuit. The loss of performance from using one suboptimal material would be outweighed by the efficiencies won by overcoming the tyranny of numbers.
Today, the company Robert Noyce would help found, Intel, squeezes billions of transistors onto those chips, making Turing’s machine universal in more ways than one.
The Visceral Abstract
All of the developments that led to modern computing had one thing in common – they were all considered useless by practically minded people at the time. The hole in logic, Hilbert’s program, information theory and even the integrated circuit went almost unnoticed by most people, even specialists.
In a very similar vein, many of the questions that will determine our digital future seem far from matters at hand. What do fireflies, heart attacks and flu outbreaks have to do with Facebook campaigns? What do we mean when we speak of intelligence? What does a traveling salesman have to do with our economic future?
What is practical and what is nonsense is often a matter not of merit, but one of time and place. Our digital future will be just as improbable as our digital past.
As I explained in an earlier post, our present digital paradigm will come to an end somewhere around 2020. What comes after will be far stranger and more wonderful than that which has come before and will come upon us at an exponentially faster pace. A century of advancement will be achieved in decades and then in years.
The key to moving forward is to understand that, as far as we have come, we are, in truth, just getting started. The fundamental challenge is not one of mere engineering, but of a sense of wonder and a joy in the discovery of basic principles or, as Richard Feynman put it, the pleasure of finding things out.
– Greg
Greg,
Another great reminder of why I consistently read your material. It’s not common to see Hilbert, Church, Shannon et al. mentioned in a post by someone who makes a living in media. It’s a great title, as well. I’d not thought of it like that before, despite being in the field for decades.
Thank you Nathan. That’s very kind of you.
– Greg
I think you’ve finally given the game away. You’re already existing in the post 2020 world (a wormhole? parallel universe? time machine? Whatever.) which is what allows you the time and resources to post so bloody thoughtfully, articulately and with such insight. Isn’t that right Greg? Eh? Keep it up by the way. Do we have hover boards in 2020? Hope so
Yeah, hover boards sound pretty cool, but I’d just be happy with beer that doesn’t make you fat:-)
Thanks a lot for the kind words. Much appreciated!
– Greg
I should disclose that I wrote a book, in my (bad) English, about these subjects: From Dust to the NanoAge (2009), available on Lulu. I wrote it because there has always been a lack of historical detail about this part of technology.
I want to contribute to the accuracy of the basic historical record, so please don’t take my comment as polemical. I agree with the thrust of your story, indeed. Allow me to add some spicy details.
“First came the transistor, invented by John Bardeen, William Shockley and Walter Brattain”. This is not true. There were many transistors before, all of them duly patented, starting from 1925 in Canada first and in the US later. Shockley, Bardeen and Brattain did not patent a single transistor.
The same with Kilby and Noyce.
BTW, the same is true for the invention of the microprocessor. It was not due to the Italian master Federico Faggin (who developed a brand-new technology by himself) but to the American Ray Holt.
Thanks for the comment. However, I’ll stick to the conclusions of the Nobel committee, which gave credit to those I mentioned (except for Noyce, who had passed away when the prize was awarded).
– Greg
Sure. I understand. A good starting point for curious minds is http://www.uspto.gov/web/offices/com/sol/og/1998/week43/patadve.htm.
Instructions are placed on the stack where they are fed to the CPU for execution. An instruction is a physically implemented symmetry with its opcode referencing implemented operability and its operand referencing implemented data. Now you whizz this baby around at massively high speeds & you generate the relevant energies that the user can plug their experience into. This device is an analytical engine & it is not an invention but a discovery [by Charles Babbage] very much like the wheel. [Which means you can create perfect operation by totally conforming to the First Law Of Thermodynamics.]
The base unit of this device is a unit of energy that conforms to a dual-state energy paradigm. When you decide to abstract your view of this you deny the laws of physics and lay this device open to vagaries of the human imagination. Like thinking you are building a philosophical logic machine that runs on numbers. It will never work.
Thanks for the comment.
Taking nothing away from either Ada Lovelace or Charles Babbage, I don’t consider his Analytical Engine (or rather the plans for it; he was never actually able to complete it) an example of modern computing, which is why I left it out. Some aspects I consider crucial (e.g. binary logic, electronic operation) are absent.
While I recognize that reasonable people can disagree on the point, I do think it is salient that neither Lovelace nor Babbage was cited in any of the seminal papers (they were mentioned quite a bit anecdotally and were important historically, but that’s not quite the same thing).
– Greg
Greg,
I am sure that we can agree that the human control mechanism of this device is of core importance. Through all the generations of computer you have referenced, the way that a programmer’s instructions are delivered is a constant. There is no other way than Babbage’s way, with an instruction [as opcode & operand] being delivered:
Sequential [[conditional-iterative]nested]
Surely the person who invented this deserves a mention, and the fact that his original specification has gone unchanged for the better part of two centuries deserves recognition, not to be taken for granted. Not to mention the punch cards and the algorithms and the printer!
As for the other stuff: the computer is an abstract concept designed to generate profit from fail & it does catastrophically & consistently. I think Mr Babbage would have gotten himself written out of history by this bunch of industrialists too. All eyes on the $NYSE.
Denis
Yes, I do agree. As I said, I was taking nothing away from Babbage or Lovelace, just that they did not fit in the context of my post.
– Greg
The opposite of living by the saying, not learning from history and not repeating mistakes, is not a case in which you look backwards and learn from history.
So I didn’t read the rest of your article yet. I got here from your article on Bernie in Forbes and then the June 8th one on the singularity.
I’m not entirely sure I get what you mean, but that article will be up here on Wednesday. Also, if you like you can comment on Forbes where there is an active discussion.
awesome writings man. really love it