The Evolution of Intelligence

2012 October 21

The mathematician G.H. Hardy once wrote that “for any serious purpose, intelligence is a very minor gift.”  As someone who intimately knew some of the greatest minds in history, he would know.

What he meant was that there are many things other than intelligence required to succeed at an intellectual task.  Persistence and luck surely play a role and, as Einstein famously noted, imagination is supremely important.

Nevertheless, intelligence is something we admire, both in ourselves and in others.  For most of history, it has been considered a uniquely human virtue.  So it is unnerving, even terrifying, when we encounter other types of intelligence.  From crowdsourcing to computers performing human tasks, we’re going to have to learn to make our peace.

Personal Intelligence

Human intelligence has been the subject of intense study for over a century, mainly by way of IQ scores, which are based on standardized tests.  While often maligned, IQ scores have been shown to correlate between 30% and 50% with professional achievement.

Intelligence also has a significant genetic component, which accounts for the amazing capabilities of child prodigies.  Some, like Gauss, who corrected his father’s arithmetic at the age of three, and John von Neumann, who could read entire books and recite them from memory in another language, went on to great accomplishments, although most prodigies do not.

On the other hand, there are many who are revered for their intelligence but do not have exceptional IQ scores.  Richard Feynman, considered by many to be one of the greatest minds of the 20th century, had an IQ of 125, high but not unusual.  Many have suggested that once you’re smart enough, higher intelligence won’t help you much.

Nevertheless, we admire intelligence because it is a personal quality.  Smart people are interesting, can solve tough problems and delight us with their sharp wit.

Collective Intelligence

Whatever an individual’s intelligence, it is, for many tasks, dwarfed by collective intelligence.  As James Surowiecki explained in The Wisdom of Crowds, collections of people are consistently more accurate than experts.  Some technologies, like Intrade and Google Flu Trends, are designed to harness the power of the hive mind to predict events.
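Surowiecki’s observation has a simple statistical backbone: when estimation errors are independent, they tend to cancel out in the average.  A minimal sketch in Python (the true value and noise level are made up purely for illustration):

```python
import random

random.seed(42)
TRUE_VALUE = 1000   # e.g. jelly beans in a jar
CROWD_SIZE = 500

# Each person's guess is the true value plus independent random error.
guesses = [TRUE_VALUE + random.gauss(0, 200) for _ in range(CROWD_SIZE)]

crowd_estimate = sum(guesses) / CROWD_SIZE
crowd_error = abs(crowd_estimate - TRUE_VALUE)
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / CROWD_SIZE

print(f"crowd error:              {crowd_error:.1f}")
print(f"average individual error: {avg_individual_error:.1f}")
```

With these toy numbers, the averaged guess typically lands within a few units of the truth, while a typical individual is off by well over a hundred.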

Further, collective intelligence can be used to create as well as to predict.  Wikipedia is far more comprehensive and accurate than any encyclopedia developed by editors.  Open source software, such as Linux and Apache, runs much of our most critical technological infrastructure.

However, as I’ve written before, crowds can also be stupid.  They give us market booms and busts, angry mobs and lots of other bad things.  Most corporate acquisitions fail because of the winner’s curse, the documented tendency for firms to overpay.  Put a bunch of smart people in a crowd and they can do some seriously stupid things.

For collective intelligence to work, there must be both independence and diversity.  If the group lacks independence then feedback loops ensue, which can create runaway ideas that travel far from any semblance of reality.  Furthermore, research has shown that diverse groups outperform intelligent ones that are more homogeneous.
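The need for independence shows up in the same toy model: add a single bias that everyone shares (a rumor, a feedback loop) and averaging stops helping, because a common error cannot cancel itself out.  A hypothetical sketch, with all numbers invented for illustration:

```python
import random

random.seed(0)
TRUE_VALUE = 1000
CROWD_SIZE = 500
TRIALS = 200

def crowd_error(shared_bias_sd):
    # One bias shared by the whole crowd (e.g. a rumor everyone heard),
    # plus each person's independent noise; the crowd reports its mean guess.
    bias = random.gauss(0, shared_bias_sd)
    guesses = [TRUE_VALUE + bias + random.gauss(0, 200)
               for _ in range(CROWD_SIZE)]
    return abs(sum(guesses) / CROWD_SIZE - TRUE_VALUE)

def mean_error(shared_bias_sd):
    return sum(crowd_error(shared_bias_sd) for _ in range(TRIALS)) / TRIALS

# Averaging cancels independent noise, but not a shared bias.
print(f"independent guesses: mean crowd error {mean_error(0):.1f}")
print(f"shared bias present: mean crowd error {mean_error(300):.1f}")
```

Run it and the difference is stark: the independent crowd stays close to the truth, while the biased crowd drifts as far as its shared error takes it.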

Artificial Intelligence

The concept of artificial intelligence got its start at a conference at Dartmouth in 1956. Optimism ran high and attendees believed that machines would be able to do the work of humans within 20 years.  Alas, it was not to be.  By the 1970s, funding dried up and the technology entered the period now known as the AI winter, during which very little happened.

Slowly, however, progress was made.  Computers became increasingly able to do human tasks, such as character recognition, making recommendations on Amazon and organizing itineraries on travel sites.  We didn’t see the algorithms at work, but they were there, computing on our behalf.

Now artificial intelligence is coming out of the shadows.  We ask Siri for directions and fight for our virtual lives in video games.  IBM’s Deep Blue triumphed in chess and Watson prevailed in Jeopardy!  Google is now developing autonomous vehicles and there are even computers that can understand art.

Ray Kurzweil believes that we will have strong artificial intelligence (machine intelligence that equals or surpasses human intelligence) by around 2030 and, as fast as things are moving, I wouldn’t bet against him.

Flying By Wire

The blurring lines of intelligence are giving us humans something of an identity crisis. What happens when such an intensely human quality is usurped, first by faceless masses and then by machines?  Where will we find our place in the world?

Steven Johnson gives us a clue in his new book Future Perfect, in which he describes flying by wire: pilots’ controls are augmented by computer algorithms, which are in turn informed by the collective intelligence accumulated over countless hours of previous flights.  While the instruments are automated, the intent remains human.
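In spirit, the kind of augmentation Johnson describes can be reduced to a toy “envelope protection” rule: the pilot’s command passes through unchanged unless it strays outside limits distilled from past flights.  Everything here, names and numbers alike, is illustrative, not a real avionics interface:

```python
# Limits assumed (hypothetically) to be distilled from accumulated flight data.
SAFE_PITCH_RANGE = (-15.0, 25.0)  # degrees

def fly_by_wire(pilot_pitch_command: float) -> float:
    """Return the pitch actually applied: human intent, machine-bounded."""
    low, high = SAFE_PITCH_RANGE
    return max(low, min(high, pilot_pitch_command))

print(fly_by_wire(10.0))   # inside the envelope: passes through as 10.0
print(fly_by_wire(40.0))   # outside the envelope: clamped to 25.0
```

The machine never originates the maneuver; it only bounds it, which is the point: the algorithm carries the crowd’s accumulated experience while the human keeps the intent.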

In a similar vein, John Battelle has described Google as a “database of intentions.”  Even in our intensely technological world, our dreams and desires remain our own.  We retain the power to choose our actions and how we would like to carry them out.  We can augment our faculties or leave them bare.

In the future, our world will be driven by machine intelligence, but our choices will remain our own.

– Greg

10 Responses
  1. October 22, 2012

@greg, everything you said was correct, and I too agree that the future is going to be full of artificial intelligence. The analysis you included on artificial, personal and collective intelligence impressed me much, but the thing is that maybe in our future there will be no work for the human brain….

  2. October 22, 2012

    The mind tends to wander. I’m sure we’ll come up with something:-)

    – Greg

  3. October 23, 2012


    Good read. I think we need to regard artificial intelligence as a simple engine to work with information and to solve ordinary tasks.

    We are today at the “second information barrier,” when humans cannot manually acquire and process the growing volume of information.

    The problems of Big Data, e-collaboration and automatic business transactions (for instance, e-procurement) need AI. It looks more like an automatic washing machine than a computerized brain or “thinking ocean.”


  4. October 24, 2012

    Thanks for sharing Sergei.

    – Greg

  5. Fabian permalink
    October 24, 2012

    Hi Greg,

    Just a remark to your Feynman example: Watch from 18:30 to 19:20 – I hope this convinces you that Feynman’s IQ was not in the 125 range.


  6. October 24, 2012


    That is exactly my point. Nobody disputes that Feynman was a genius but, at least according to him as reported in James Gleick’s biography, he didn’t score exceptionally high on his IQ test in high school.

    I didn’t give him the test, so I can’t be 100% sure, but this has been widely reported and is probably the reason why Feynman himself was highly sceptical of intelligence tests.

    – Greg

  7. Fabian permalink
    October 24, 2012

    Yes, I understood your point and agree there are many reasons to be skeptical of IQ scores, but nevertheless I think there is a pretty high correlation between Putnam Prize winners and IQ. At least to an extent that makes Feynman’s alleged score of 125 highly doubtful.

  8. October 24, 2012

    Well, I guess correlation isn’t causality and besides, 125 isn’t low by any means. It’s simply not super high.

    – Greg

  9. March 27, 2013

    Really fantastic article. Thanks very much for this history and your insights. What I’m struggling with, however, is the ending of your piece, and the idea that our “choices are our own.” I’m a big fan of Eli Pariser’s “Filter Bubble” and feel in one sense our decisions are becoming less our own all the time. While I love the idea of pilots having the collective wisdom of many to guide their piloting, my first thought is what happens on the flight where the tech fails – what if this particular pilot missed the “landing” training?

    Not being contrarian on purpose, mind you. Your article is really insightful and helpful. But the paradigm I’m struggling with is the notion (not saying you said this; this is my thought) that “people are better/more human than machines, which is why we don’t have to worry about AI.” Machines don’t believe that, as they don’t have sentience. But we trust them to make decisions (via algorithms) about book preferences, where to drive, who to date (dating services etc.) and so on. The collective wisdom of some of these systems is there in some examples (like the pilot idea). From my understanding, how Google parses queries (thinking of “Filter Bubble”) varies from user to user based on cookie data/behavior.

    Algorithms are now creating algorithms. Since we’re in the infancy of AI this doesn’t seem like a big deal. But ethics aren’t typically part of these choices, which means our improved algorithmic paradigms are, by definition, lacking the human context of ethics. So we’re training machines to not necessarily have the more human syntax/context that would keep them humanistic as we surge forward with technology, many times because we can versus because we should. I don’t doubt, by the way, that crowdsourcing patterns or other systems do include ethical considerations, but from my research, AI is often focused on pattern recognition and analysis over anything that could be called driven by ethics.

    So my point is I think we should all think hard about AI and stop saying (not that you are, mind you) “but we’re human and can experience art” etc. Machines don’t give a crap about that and are automating the world and, to a large extent, our decisions, ability to experience serendipity, or even exposure to things we may find unpleasant. Some struggles should be automated (arranging my calendar). Others, not so much.

    And again – thanks for such a well thought out piece.

    John C. Havens
    Founder, The H(app)athon Project

  10. March 27, 2013


    You make excellent points and I, like most people thinking seriously about this, am struggling a bit.

    In any case, what I’ve come up with is this: There are some things a computer will never do, such as striking out in little league, getting married, seeing their child born, etc. It is from these personal experiences that we form desire and intent (I discussed this idea at more length here).

    So my point about choice is that we choose how we use technology, just as a pilot chooses to go on autopilot or fly manually. Where the difficulty comes in, I think, is that all too often this isn’t a conscious choice (Kevin Kelly discussed this at length in What Technology Wants).

    So, back to your “filter bubble” example, I think we have to take some responsibility for it. If you know it exists and feel it’s a problem, you can work to seek out other influences, much like a person who lives in a small town (or someone in a big town, for that matter) can seek out geographically disparate influences.

    In other words, it is our choice whether we want to go off autopilot and when.

    I hope that suffices for an answer. Thanks for such an insightful question.

    – Greg
