Why The Future Of Technology Is All Too Human
When Ray Kurzweil published The Age of Spiritual Machines in 1999, he predicted a new era of thinking machines that would meet and then exceed human intelligence. The idea, which seemed outlandish at the time, doesn’t seem so crazy anymore.
Today, computers are taking over the work of humans and it appears that we are entering a new industrial revolution. While this alarms some, many technologists point out that we’ve been through similar times of technological change and emerged better for it.
That’s true. We are far better off than we were a century ago, when nearly half of us worked on farms, few had electricity and life expectancy was less than 50. But we endured a century of strife, two World Wars and countless genocides to get here. The truth is that, much like in the industrial age, the great problems we face aren’t ones of technology, but of culture.
A Short History Of Artificial Intelligence
The concept of artificial intelligence got its start at a conference at Dartmouth in 1956. Optimism ran high and it was believed that machines would be able to do the work of humans within 20 years. Alas, it was not to be. By the 1970s, funding had dried up and the field entered the period now known as the AI winter.
Slowly, however, progress was made. Computers became increasingly able to do human tasks, such as recognizing characters, making recommendations on Amazon and organizing itineraries on travel sites. We didn’t see the algorithms at work, but they were there, computing on our behalf.
In a later book, The Singularity Is Near, published in 2005, Kurzweil illustrated the point with a cartoon: posters on the wall show what computers can’t do, while others on the floor represent limits that have already been surpassed. If you look closely, you’ll notice that some of the tasks still on the wall have since been automated.
Today, artificial intelligence is coming out of the shadows and just about every major tech company has an active artificial intelligence program. From Apple’s Siri to IBM’s Watson to literally hundreds of other applications under development, thinking machines are beginning to permeate our everyday lives.
The Machines Take Over
Like most new technologies, artificial intelligence first gained traction as a way to replace cheap labor. Robots have been working in factories for decades, so it shouldn’t be surprising that today we have lights-out factories and warehouses that are fully automated, needing just a bare-bones staff to monitor them.
Now computers are also doing jobs once thought of as not only innately human, but the preserve of educated professionals. MD Anderson, a world-class cancer center in Houston, developed the Oncology Expert Advisor with IBM to work alongside doctors. In law offices, software is replacing thousands of man-hours devoted to legal discovery.
Today, computers are even being deployed to assess creative works. Major record labels use software to determine the commercial viability of new songs and send them back to the artists with recommendations if they don’t pass muster. Movie studios also run screenplays through a computer program to evaluate their potential.
The economic potential of machines that can perform human tasks is staggering. Experts estimate that driverless cars alone might be worth trillions, but just as the benefits are real, so are the potential costs.
The Social Challenge
The obvious question that all of this raises is, if computers are doing the work of humans, what are all the people going to do? We work, after all, not just for bread, but for dignity and purpose. The automation of labor is nothing less than the great social dilemma of our generation.
The effects are already being felt. In a particularly disturbing essay, MIT’s Andrew McAfee shows that while productivity and economic output continue to rise, employment and household income have stagnated or even fallen. In other words, work doesn’t pay anymore; only ownership does.
This, probably more than anything else, is why income inequality is on the rise—and not just in the U.S., but across most developed countries. It’s a real problem because income inequality threatens social stability. When people believe that the social order benefits others rather than themselves, they feel little stake in it.
Multiple Intelligences
We tend to think of intelligence as a singular quality—people are smart, dumb or somewhere in between. However, many researchers, Howard Gardner in particular, argue that there are multiple intelligences, and, as billionaire investor Warren Buffett points out, particular skills are favored at particular times:
“Take me as an example. I happen to have a talent for allocating capital. But my ability to use that talent is completely dependent on the society I was born into. If I’d been born into a tribe of hunters, this talent of mine would be pretty worthless. I can’t run very fast. I’m not particularly strong. I’d probably end up as some wild animal’s dinner.”
Yet research shows that even more important than having the right skills for the right problem is the ability to apply a diversity of approaches. In a complex world, there is no ultimate wisdom. The more paths that are travelled, the greater the likelihood that we will come up with the best answer to a difficult problem.
Manoj Saxena, who runs IBM’s Watson Program, told me that they have taken exactly that approach to cognitive computing. Watson is not one computer, but a wide array of systems that each apply different methods, such as Bayesian nets, Markov chains and genetic algorithms. Watson reconciles the various results before it gives an answer.
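To make that idea concrete, here is a minimal sketch of what reconciling answers from several independent methods can look like. It is purely illustrative: the method names, confidence scores and simple weighted-voting rule below are assumptions made for the example, not a description of how Watson is actually built.

```python
# Illustrative sketch only: a toy "reconciliation" step in the spirit of an
# ensemble of methods. Method names, confidences and the voting rule are
# assumptions for demonstration, not IBM Watson's actual design.
from collections import defaultdict

def reconcile(candidate_answers):
    """Combine (answer, confidence) pairs from several independent methods
    by summing confidence-weighted votes and returning the top answer."""
    scores = defaultdict(float)
    for method, (answer, confidence) in candidate_answers.items():
        scores[answer] += confidence  # each method votes with its confidence
    # Pick the answer with the highest combined score
    return max(scores.items(), key=lambda item: item[1])

# Hypothetical outputs from three different techniques answering one question
results = {
    "bayesian_net":      ("Paris", 0.80),
    "markov_chain":      ("Paris", 0.65),
    "genetic_algorithm": ("Lyon",  0.70),
}

best_answer, combined_score = reconcile(results)
print(best_answer, combined_score)  # -> Paris 1.45
```

The point of such a design is that no single technique decides the answer on its own; each contributes evidence, and it is the combination that gets returned.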
And therein lies the way forward. The future does not belong to an ultimate form of intelligence, but the ultimate mix of skills. As a case in point, at a freestyle chess tournament combining both humans and machines, the winner was not a chess master with a supercomputer, but two amateurs running three simple programs in parallel.
From The Information Economy to The Age Of Connection
For the past century or so, the most reliable path to wealth has been the ability to process information. That’s what got you into the most prestigious universities and graduate programs, which in turn led to a career at a top firm. Yet the reason that information processing has been so highly valued is precisely because humans are so bad at it.
So it shouldn’t be surprising that computers are taking over what we have come to regard as high-level human tasks. We did not evolve to optimize, but to survive and, perhaps most of all, to collaborate with others to ensure our survival. We are, after all, creatures of biology, not silicon.
As MIT’s Sandy Pentland put it, “We teach people that everything that matters happens between your ears, when in fact it actually happens between people.” So technology doesn’t eliminate the need for human skills, but it will change which skills are most highly valued.
Lynda Chin, who leads the artificial intelligence effort at MD Anderson, already sees how computers will change the medical profession. She notes that the less time doctors spend poring through endless research, the more effort they can put toward meaningful interactions with patients and imagining new approaches that push boundaries.
So the answer to our technological dilemma is, in fact, all too human. While the past favored those who could retain and process information efficiently, the future belongs to those who can imagine a better world and work with others to make it happen.
– Greg
Hi Greg, thanks for the perspective. This points out the even greater need for our young people to engage in critical thinking – something that has been somewhat lacking in recent years. To provide value in the future there will be machine operators (maybe) and those who can elevate the solution and deliver the intent!
Very true. Thanks Enzo.
– Greg
I think we need more artificial intelligence to improve our own minds and tech as well.
Good point! Thanks Samantha.
– Greg