
How The Machines Are Learning To Take Over

2013 April 7

It’s fun to look at old pictures of Bill Gates from back when he was a boy genius.  Unlike Mark Zuckerberg, he arrived on the scene looking very much like the nerd he was: big glasses and a sheepish grin, like he’s just happy, albeit a bit embarrassed, to be invited to the party.

He’s grown up a lot since then.  Years of success and media training have given him a quiet confidence.  He speaks from the heart about issues he is devoted to, like education and eradicating malaria.

And technology has grown up with him.  The bulky green fonts and command lines have been replaced by far more natural interfaces.  Computers are now able to recognize speech, text and even gestures.  As they continue to learn, they will become intelligent enough for most human tasks, which will change how we work forever.

A Petty Academic Squabble

It all began in the early 20th century with one of those obscure academic squabbles that usually don’t amount to much.  It had to do with statistics and whether humans have free will.  A mathematician and theologian named Pavel Nekrasov argued that because human activity follows the same mathematical laws as independent events, free will must exist.

Andrei Markov, one of the great mathematicians of the day, thought Nekrasov’s argument was hogwash.  After all, he argued, just because independent variables follow a certain mathematical law doesn’t mean that conscious, directed activity can’t do so as well.

To prove his point, he did a mathematical analysis of Eugene Onegin, Pushkin’s famous novel in verse, and showed that the combinations of vowels and consonants follow the law of large numbers as well.  A vowel will most likely be followed by a consonant and vice versa, in proportions that become more stable as you analyze more text.

And so Markov succeeded in showing that dependent variables can also settle into stable, predictable probabilities.  It was the kind of interesting but relatively useless insight that academics specialize in, and it remained obscure for most of the 20th century.
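To get a feel for what Markov actually did, here is a minimal sketch in Python of the same kind of tally.  A short English sentence stands in for the thousands of letters of Eugene Onegin he worked through by hand, and the numbers it prints are purely illustrative.

    from collections import Counter

    # Stand-in text; Markov tallied thousands of letters of Eugene Onegin by hand.
    text = "my uncle has most honest principles when taken gravely ill"
    letters = [c for c in text.lower() if c.isalpha()]

    def kind(c):
        return "vowel" if c in "aeiou" else "consonant"

    # Count how often each kind of letter follows each other kind.
    transitions = Counter(zip(map(kind, letters), map(kind, letters[1:])))

    for prev in ("vowel", "consonant"):
        total = sum(n for (p, _), n in transitions.items() if p == prev)
        for nxt in ("vowel", "consonant"):
            p = transitions[(prev, nxt)] / total
            print(f"P({nxt} | {prev}) = {p:.2f}")

Run on the real poem, those proportions settle down as the sample grows, which is exactly the point Markov was making.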

Recently, however, Markov models have taken center stage in how machines learn.

Learning to Decipher Patterns

Today is a beautiful, sunny day and, chances are, it will be tomorrow as well.

You see, weather, much like text, is a dependent system.  If it’s sunny today, it’s more likely to be sunny tomorrow; if it rains on Saturday, I shouldn’t bank on a nice time at the pool on Sunday either.

Brian Hayes (to whom I am indebted for the narrative above) gave an excellent example of how we can adapt Markov’s insights to predict the weather with this chart he included in a recent article in American Scientist.

[Figure: Weather Markov model]

If you used this scheme to predict the weather, it would be reasonably accurate.  Professional forecasters often do apply some version of a Markov model in order to get a baseline and then incorporate other factors, such as barometric pressure, to improve accuracy.
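To make the idea concrete, here is a minimal two-state sketch.  The sunny/rainy transition probabilities are invented for illustration, not taken from Hayes’s chart.

    import random

    # Hypothetical transition probabilities (illustrative, not Hayes's figures):
    # each row gives P(tomorrow's weather | today's weather).
    transition = {
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    def forecast(today):
        """Most likely weather tomorrow, given only today's state."""
        return max(transition[today], key=transition[today].get)

    def simulate(today, days):
        """Random walk through the chain: one possible stretch of weather."""
        history = [today]
        for _ in range(days):
            today = random.choices(list(transition[today]),
                                   weights=transition[today].values())[0]
            history.append(today)
        return history

    print(forecast("sunny"))     # sunny
    print(simulate("rainy", 7))  # e.g. ['rainy', 'rainy', 'sunny', ...]

The forecast function is the baseline the forecasters start from; the extra factors they layer on top are what the simple chain leaves out.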

Once you begin thinking about it, you start seeing Markov models everywhere.  What I’m doing now will affect what I’ll do next.  What I say now will affect what I say next. Human behavior is, when you get down to it, like one big Markov chain.  Both necessity and habit make us highly predictable.

What’s more, just like weather forecasters, we can augment Markov models by adding information as it comes in, adapting our analysis through a technique called Bayesian inference.  Interestingly, this is exactly what we humans learn to do as we mature: adapt to the habits of others in our lives.
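Here is a small sketch of what that updating might look like: start with a prior guess about one transition probability and revise it as each new day’s observation arrives.  The Beta prior below is a standard textbook choice, not something from Hayes’s article, and the observations are invented.

    # Bayesian updating of one transition probability, P(sunny tomorrow | sunny today).
    # A Beta(alpha, beta) prior is the usual conjugate choice for a probability.

    alpha, beta = 2.0, 2.0   # prior: roughly 50/50, but weakly held

    # What the weather did on days that followed a sunny day (invented data).
    observations = ["sunny", "sunny", "rainy", "sunny", "sunny"]

    for next_day in observations:
        if next_day == "sunny":
            alpha += 1       # one more sunny-after-sunny observation
        else:
            beta += 1        # one more rainy-after-sunny observation
        posterior_mean = alpha / (alpha + beta)
        print(f"after '{next_day}': P(sunny | sunny) is roughly {posterior_mean:.2f}")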

Infinite Monkeys Hard at Work

There is an old literary concept, called the Infinite Monkey Theorem, which states that an infinite number of monkeys pecking away at an infinite number of keyboards would eventually produce masterpieces like Pushkin’s Eugene Onegin, making the work more a matter of curation than creation.

Today, with data centers running hundreds of thousands of processors that can perform millions of calculations per second, we are beginning to experience a real-life version of the Infinite Monkey Theorem.  Companies like Narrative Science are able to produce coherent documents from raw data this way, and Brian Hayes has built a rudimentary program that does a passable job.
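Hayes’s actual program isn’t reproduced here, but a toy word-level Markov text generator in the same general spirit might look something like this (the seed text is invented for illustration):

    import random
    from collections import defaultdict

    # Toy word-level Markov text generator; the seed corpus is invented.
    corpus = (
        "the machines are learning to recognize patterns "
        "the machines are learning to take over "
        "humans are learning to work with machines"
    ).split()

    # For each word, remember every word that has followed it.
    chain = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        chain[current].append(nxt)

    word = "the"
    output = [word]
    for _ in range(12):
        word = random.choice(chain[word]) if chain[word] else random.choice(corpus)
        output.append(word)

    print(" ".join(output))

The output is grammatical-ish nonsense, which is exactly the point: the patterns are there, the meaning isn’t.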

And it’s not hard to see how the process can be reversed.  If mindless processors can be made to create patterns, they can learn how to recognize them as well.  More sophisticated forms of Markov models are what drive pattern recognition technologies such as Apple’s Siri and Microsoft’s Kinect.
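The “more sophisticated forms” in question are, broadly speaking, hidden Markov models, where the thing you care about (a word, a gesture) is hidden and you only observe noisy evidence of it.  Here is a compact sketch of the standard decoding step, with toy states and probabilities invented for illustration:

    # Toy hidden Markov model: infer the most likely hidden states ("speaking" vs.
    # "silent") from noisy observations ("loud" vs. "quiet") using Viterbi decoding.
    # All numbers are invented for illustration.

    states = ["speaking", "silent"]
    start = {"speaking": 0.5, "silent": 0.5}
    trans = {
        "speaking": {"speaking": 0.7, "silent": 0.3},
        "silent":   {"speaking": 0.3, "silent": 0.7},
    }
    emit = {
        "speaking": {"loud": 0.8, "quiet": 0.2},
        "silent":   {"loud": 0.1, "quiet": 0.9},
    }

    def viterbi(observations):
        # best[s] = (probability, path) of the best state sequence ending in state s
        best = {s: (start[s] * emit[s][observations[0]], [s]) for s in states}
        for obs in observations[1:]:
            best = {
                s: max(
                    ((p * trans[prev][s] * emit[s][obs], path + [s])
                     for prev, (p, path) in best.items()),
                    key=lambda t: t[0],
                )
                for s in states
            }
        return max(best.values(), key=lambda t: t[0])[1]

    print(viterbi(["loud", "loud", "quiet", "quiet"]))
    # ['speaking', 'speaking', 'silent', 'silent']

Real speech and gesture recognizers of that era worked with vastly more states and learned probabilities, but the decoding idea is the same.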

The Education of an Algorithm

Recognizing patterns is one thing, understanding meaning is another.  A toddler begins to learn language by identifying phonemes – the elemental units of language – and eventually is able to form words.  However, it takes years of exposure before a child learns to talk, and even more before they are able to read.  Humans can spend a lifetime deciphering meaning in a particular field.

Computers, however, have far fewer limitations.  Their capacity, for practical purposes, is almost infinite (although somewhat constrained by budgetary concerns). Consequently, they can learn at superhuman speeds.  IBM’s Watson computer can reportedly analyze hundreds of millions of documents in seconds.

For example, researchers at IBM taught their algorithm to translate between French and English by exposing it to proceedings of the Canadian Parliament, which by law must be produced in both languages.  This allowed them to connect not just words, but entire phrases and even slang.  It would take a year for a human to sit through it all, but a computer can do it without breaking a sweat.
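IBM’s actual models were far more elaborate, but the core move is learning word correspondences from sentence pairs that mean the same thing, and that much can be sketched in a few lines.  The aligned pairs below are invented, not drawn from the parliamentary record:

    from collections import Counter, defaultdict

    # Toy illustration of learning word correspondences from an aligned corpus.
    # The sentence pairs are invented; the real corpus runs to millions of them.
    pairs = [
        ("the house is open", "la chambre est ouverte"),
        ("the house will vote", "la chambre va voter"),
        ("the door is open", "la porte est ouverte"),
    ]

    # For each English word, count which French words appear in the matching sentence.
    cooccur = defaultdict(Counter)
    for english, french in pairs:
        for e in english.split():
            cooccur[e].update(french.split())

    print(cooccur["house"].most_common(3))
    # "chambre" and "la" top the list; real statistical models go further and learn
    # to discount words like "la" that show up in nearly every sentence.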

Others, such as Mattersight, a company that uses artificial intelligence to analyze and improve call center operations, take a more human-centered approach.  Trained analysts check the computer’s work and teach it to improve over time.  Researchers at Cornell have recently developed algorithms that can learn by merely observing human behavior.

Much like a young Bill Gates, our machines are learning to be more human.

The New Learning Organization

For decades, management theorists have been talking about how organizations need to continually learn by eschewing the traditional command and control approach in favor of empowering their employees.

Today, computers are beginning to perform legal discovery, make medical diagnoses and even evaluate creative work such as music and screenplays.  In other words, they are learning many of the same things that people do, except that they do not get tired or sick, never ask for a raise and, when they get too old to function effectively, can simply have their hardware replaced.

This, of course, presents a dilemma.  How can organizations empower their people at the same time that they are outsourcing their jobs to algorithms and microchips?

The answer is this:  Rather than building skills to recognize patterns and take action themselves, effective professionals will need to focus on designing the curricula, directing which patterns computers should learn and deciding what ends their actions should serve.

– Greg

6 Responses
  1. April 7, 2013

    What a great informative post. I learned good stuff here and I’m still thinking about other stuff. As an internet marketer, I can’t wait until the computer can automatically get me on to page one for my keywords in the search results. The computer will recognize the patterns and take action if I tell it what the end goal is. I’m gonna be rich!

    Thanks again!

    Fred Tappan

  2. April 7, 2013

    Glad to hear it Fred!

    – Greg

  3. April 8, 2013

    Greg,

    Very impressive post.

    If you augment your cognitive artificial intelligence approach with an agent-based collaboration model, then you will get a self-developing ecosystem.

    Local fluctuations in agent relationships will cause instant structuring based on automatically choosing the appropriate patterns. If parameters go outside the determined limits, then control and decision-making will be handled manually…

    Sergei

  4. April 8, 2013

    Yes. Artificially intelligent systems work best when some emergent behavior is built in.

    – Greg

  5. October 13, 2014

    Machines also need to be taught how to learn. Marketers, not data wonks, need to teach computers how to learn. What type of data should they be looking at? Which data should they be looking at? How often should a correlation occur before the machine recognizes it as a pattern? How strong should those correlations be?

    These are things that rely on intuition, and asking marketers to answer these questions forces them to quantify their intuition and gut feeling. Marketers must transfer the parameters of their intuition and gut feelings to machines, which can then make those decisions programmatically.

  6. October 13, 2014

    Good points. Thanks Desmond.

    – Greg
