
The Problems Of Progress

2016 May 1
by Greg Satell

Jules Verne, the 19th century science fiction writer, made a number of predictions, like submarines, space travel and even newscasts, that turned out to be accurate. His visions of the future were so vibrant that many inventors and scientists in the 20th century took inspiration from his work.

Yet many of our most difficult challenges couldn’t have been imagined even by a genius like Verne. The obesity epidemic, climate change and the economic and health issues that come with longer life spans wouldn’t have made any sense to a 19th century audience, when progress meant more to eat, larger industry and less mortality.

In much the same way, some of the thorniest problems we’ll have to face will come from the unintended consequences of the advances we make today. What will make them so difficult to overcome is that they cannot be solved in a research lab or a think tank, but only in the public square. Unfortunately, we haven’t even begun to think them through.

When Is It Okay To Edit Genes?

In the 1850s, an obscure German monk named Gregor Mendel began to experiment with pea plants. He tracked specific traits through several generations and, with amazing diligence and no small amount of luck, was able to derive rules for heredity. It was an accomplishment that stands, even today, as one of the great scientific discoveries in history.

Still, no one really noticed at the time. It wasn’t until a half century later that researchers completed similar studies and uncovered Mendel’s work. Fifty years after that, Watson and Crick made their famous discovery of the structure of DNA. In 2003, roughly 150 years after Mendel performed his now-famous experiments, the human genome was finally mapped.

These discoveries have been hailed as great achievements—and rightly so. They are pointing the way to a new age of medicine, especially with respect to cancer, and have the potential to save untold millions of lives. However, as Scientific American reports, our scientific prowess is now taking us into uncharted territory.

A new technique, called CRISPR, allows scientists to actually edit genes—even in the germ cells that produce offspring—and that is giving rise to a number of ethical dilemmas. Would we want, for instance, to eliminate Tay-Sachs in Ashkenazi Jews or sickle cell anemia in those of African descent? What about genetic predispositions to other diseases?

For that matter, what actually constitutes a “genetic defect?” Should we correct for nearsightedness or learning disabilities? What about sexual preference? Where do we draw the line?

Should Robots Fight Wars?

The first industrial robot, called Unimate, was installed on an assembly line at General Motors in 1961. Since then, robots have become highly integrated into our economy. They do dangerous jobs, like bomb disposal, as well as more prosaic ones, like running warehouses. There are now robots that do legal discovery and advise physicians. Some even write songs.

Robots are also increasingly being deployed on the battlefield, from the famous Predator and Reaper drones that carry out attacks on terrorists in remote places to land-based machines like iRobot’s PackBot, Boston Dynamics’ BigDog and Vecna’s BEAR. The use of battlefield robots has become so pervasive, in fact, that soldiers have bonded with them, giving them nicknames, bestowing awards on them and even holding funerals for them.

The use of robots has been invaluable. They are able to do dangerous jobs, carry cumbersome gear and get to places that human soldiers can’t. Simply put, they save lives. However, as their use and capability expand, a number of troubling issues arise. Most importantly, how much autonomy should they be given?

For example, Vecna’s BEAR (which stands for Battlefield Extraction-Assist Robot) is designed to retrieve wounded soldiers and transport them to safety on the battlefield. That requires the same kind of decision-making skills needed to clear a building of hostile fighters. Again, where do we draw the line? Should we allow robots to make decisions about killing humans?

What Is The Meaning Of Work?

Throughout history, technology has improved human lives immensely. In 1900, average incomes in America amounted to less than $500 and life expectancy was a mere 46 years. Life was hard, brutish and short. The life of an average person today would have seemed like absolute nirvana to people back then, and Dickensian sweatshops are now few and far between.

Yet progress comes at a cost. Research by MIT’s David Autor indicates that increasing automation polarizes the workforce and leads to rising inequality. However, the fault line is no longer between blue and white collar workers, but routine and non-routine tasks. So while analysts and wedding planners have prospered, bookkeepers and travel agents have not.

That means we’re going to have to seriously reimagine how we educate our kids. A focus on basic literacy tasks, like spelling and long division, will need to give way to critical thinking. And as teamwork becomes more important than individual contribution, social skills are beginning to trump cognitive skills.

So the nature of work has changed drastically from a century ago and will continue to evolve. Just as we no longer value physical labor, in the future we will have to come to grips with the fact that skills we value now—such as the ability to retain information and analyze data—will be in less demand as machines take over many cognitive tasks.

The upshot is that we will need to learn how to collaborate more effectively—with humans as well as machines—and that means we are going to have to change how we train, manage and compensate people.

The Future Is All Too Human

We tend to think of technology as separate from ourselves, mostly relegating it to the background as we focus on the things that matter to us most, such as family, friends and work. Too often, we fail to recognize that technology changes us. We co-evolve with it and make adjustments that we aren’t even aware of.

Yet it’s clear that we can’t simply sit back and watch as technology advances. Decisions have to be made. Will we edit genes to cure disease and enhance our abilities? Should we deploy robots to kill other humans? How much autonomy will we give them? What is the meaning of work when the bulk of physical and cognitive tasks are done by machines?

Clearly, these are not questions that can be solved by algorithms and test tubes. The truth is, the future is all too human. We are still in control of our destiny, perhaps more than ever before. That’s why it’s absolutely imperative that we take an active role in choosing what kind of world we want to live in.

Even now, technology doesn’t determine our future. Only we can do that.

– Greg



3 Responses
  1. Kaythi Aung permalink
    May 2, 2016

    From the ethical point of view, robots should be allowed to make decisions about killing humans only when they have been built with the ability to find a partner and nurture human babies.

  2. May 2, 2016

    Interesting idea! What if there were childbearing and rearing simulations embedded in their software?

    – Greg

  3. Gödel permalink
    May 7, 2016

    It reminds me of Huxley’s famous work “Brave New World”. I think someday robots will do all the practical tasks while humans concentrate on theoretical work, and eventually people will be totally replaced by robots, since human ontology will be obsolete and inefficient. Could be.


    By the way, “Tonto” means “dumb” in Spanish.
