
Technology’s Moral Crisis

2017 February 26

On July 16th, 1945, when the world’s first nuclear explosion shook the plains of New Mexico, J. Robert Oppenheimer, who led the project, quoted from the Bhagavad Gita, “Now I am become Death, the destroyer of worlds.” And indeed he had. The world was never truly the same after nuclear power became a reality.

In the years that followed, it became fashionable for many scientists to become activists. In 1955, Albert Einstein and the philosopher Bertrand Russell issued a manifesto that highlighted the dangers of nuclear weapons, which was signed by 10 Nobel Laureates. Later, a petition signed by 11,000 scientists helped lead to the Partial Test Ban Treaty.

Today, even small businesses are gaining access to advanced technologies like artificial intelligence and gene editing, and that puts managers in an unusual position. Much like nuclear energy, these technologies are incredibly powerful, but they are not under the control of governments or, in fact, any large institution. This time, we all need to hold ourselves responsible.

Can Algorithms Learn Right From Wrong?

On March 23rd, 2016, Microsoft released its artificially intelligent chatbot, Tay, on Twitter. Although it was designed to be friendly and outgoing, within 24 hours human interactions had transformed it into a misogynistic racist spewing Nazi propaganda. Embarrassed, Microsoft quickly took Tay down and apologized.

As the science of artificial intelligence continues to advance rapidly, it’s becoming clear that intelligent machines raise a variety of ethical dilemmas we’re not quite prepared for. Should a self-driving car risk killing its passenger to save a pedestrian? Do decisions made by robots require greater transparency than those made by humans?

As recently as a decade ago, these questions would have seemed like science fiction. Today, however, even small businesses can access powerful artificial intelligence technologies from the likes of IBM, Microsoft and Amazon. Other firms, such as Mendix, offer platforms that allow even non-technical managers to implement these technologies with just a few clicks.
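
Just how low the barrier has fallen becomes clear once you see what these services actually are: ordinary web APIs. The sketch below is purely illustrative; the endpoint URL, credential, and response fields are hypothetical placeholders rather than any particular vendor’s real interface, but a working call is often not much more involved than this:

```python
# Illustrative sketch of calling a hosted AI service over HTTP.
# The endpoint URL, credential, and response fields are HYPOTHETICAL
# placeholders, not any specific vendor's actual API.
import requests

API_URL = "https://api.example-cloud.com/v1/sentiment"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"  # hypothetical credential

def analyze_sentiment(text):
    """Send text to a hosted model and return its parsed JSON verdict."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"label": "negative", "score": 0.93}

print(analyze_sentiment("The delivery was late and the box was damaged."))
```

That is the entire integration. The hard part, as the Tay episode showed, is not calling the model but anticipating what it will do once real people start feeding it input.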

Accenture’s Chief Technology & Innovation Officer Paul Daugherty told me, “Most companies are just starting to connect the dots on ethical AI, but it’s increasingly becoming an issue. We have to design systems with an eye to accountability, transparency, fairness, honesty and how humans fit into the system.”

“This is within our control, as business leaders, to get right,” he continued, “and that’s exactly what we’re implementing within our own systems, as well as in the training programs we develop for our employees and our clients.”
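
Principles like fairness can sound abstract, but some of them reduce to checks a team can actually run. As one illustration (a toy sketch of my own, not Accenture’s methodology), here is a simple demographic parity probe: it compares a model’s rate of positive decisions across groups, and a wide gap is a signal to investigate, not proof of bias.

```python
# Toy fairness probe: demographic parity.
# Compares the rate of positive decisions (e.g., loan approvals) across
# groups. A large gap flags something to investigate, not proof of bias.
from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]
rates = positive_rates(decisions)
print(rates)                                      # approx {'group_a': 0.67, 'group_b': 0.33}
print(max(rates.values()) - min(rates.values()))  # parity gap: approx 0.33
```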

Hacking The Biological Code

In 1972, Janet Mertz, then a graduate student in Stanford’s biochemistry department, presented her plans to insert genes from the SV40 virus into the DNA of the bacterium E. coli. There was an immediate uproar. Nobody knew what transferring genes between organisms might unleash, and a moratorium on certain types of research was soon called for.

Today, those early fears seem quaint in light of a new technique, called CRISPR, that can edit genes at will. Concerns intensified in April 2016, when Chinese researchers announced that they had genetically modified cells in human embryos, raising fears of both “designer babies” and runaway genetic diseases caused by errors in gene editing.

What makes the technology even more potentially troublesome is that it is simple enough for amateur biologists to use, raising the possibility that in the future we will have to worry about bioterrorists in the same way we worry about criminal hackers today. Malicious code in our computers is bad enough; what will we do when we have to worry about malicious code in our bodies?

“CRISPR is accelerating everything we do with genomics,” says Megan Hochstrasser of the Innovative Genomics Initiative at UC Berkeley, “from cancer research to engineering disease-resistant crops and many other applications that haven’t yet come to the fore. Probably the most exciting aspect is that CRISPR is so cheap and easy to use, it will have a democratizing effect, where more can be done with less. We’re really just getting started.”
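
Part of what makes the technique so accessible is that targeting it is, at bottom, a text search. Cas9 is steered by a roughly 20-letter guide sequence that must sit immediately upstream of a short “NGG” motif (the PAM) in the target DNA. The toy sketch below (my illustration, not a real guide-design tool, which would also scan the reverse strand and score off-target matches) shows how little machinery is involved:

```python
# Toy illustration of CRISPR-Cas9 target finding: scan a DNA string for
# 20-base candidate sites followed by an "NGG" PAM motif (SpCas9's signal).
# Real guide design also checks the reverse strand and off-target risk.

def find_cas9_sites(dna, guide_length=20):
    """Return (position, protospacer, PAM) for each candidate site."""
    dna = dna.upper()
    sites = []
    for i in range(len(dna) - guide_length - 2):
        pam = dna[i + guide_length : i + guide_length + 3]
        if pam[1:] == "GG":  # PAM pattern NGG: any base, then G, G
            sites.append((i, dna[i : i + guide_length], pam))
    return sites

sequence = "ATGCGTACCGTTAGCATTGACCTGAAGGTCCATGGATCAGGTACGATTCCGG"
for pos, protospacer, pam in find_cas9_sites(sequence):
    print(f"site at {pos}: {protospacer} | PAM {pam}")
```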

Here again, we’re in mostly uncharted territory. Major corporations, such as Monsanto, have drawn enormous controversy over relatively simple (and apparently safe) genetic engineering. Now that smaller firms, without the resources of a large enterprise, will be able to access even more powerful technology, what are they opening themselves up to?

The Economic Fallout Of Automation

Ever since General Motors placed Unimate, the first industrial robot, on its assembly line in 1961, our economy has become increasingly automated. Today, it is not only physical labor that is being replaced, but routine cognitive tasks as well. One widely cited Oxford study estimated that 47% of jobs in America are at risk of being automated.

There is also increasing evidence that technology is contributing to income inequality and exacerbating social divisions. Martin Ford, author of Rise of the Robots, warns that if current trends persist, they may lead to a new form of techno-feudalism, in which the affluent and tech-savvy live in exclusive automated enclaves, separated from the poor, huddled masses.

Yet Josh Sutton, who leads the data and AI practice at Publicis.Sapient, believes that many of these concerns are overblown. “I actually think cost is the worst reason to invest in automation, because the companies that are able to thrive 10 or 20 years from now will not be the ones that cut costs, but those that transform business models. You have to be looking to bring your organization up a level.”

“The best way to roll these technologies out and gain adoption is to create a great experience, and that’s the same internally and externally,” he continues. “You have to make your technology as compelling to your employees as you do to your customers. Automation, when properly understood, does not replace humans, but extends them.”

The truth is that technology doesn’t change your culture; it reveals your culture. If you have a culture that values employees, you won’t trade them in for a robot that doesn’t take vacations or ask for a raise, because you expect them to add to your business rather than merely perform tasks efficiently.

Rising To The Challenge

We are now entering an era very similar to the one that Oppenheimer and the Manhattan Project ushered in decades ago, except that it is not only governments and large institutions that will be able to access tools of almost unimaginable power but, theoretically at least, every enterprise on the planet. This will be a challenge for even sophisticated organizations, but especially for small and medium-sized businesses.

Today, ordinary teenagers have access to more computing power and information than someone at a large institution had a generation ago. Amateurs are gaining far more power to manipulate the genetic code than the researchers whose primitive experiments prompted the partial moratorium of the early 1970s. What used to be science fiction is quickly becoming business reality.

And so, while we should all be excited about the incredible opportunities that lie before us, which are real and important, we also need to be mindful, as Oppenheimer was, that technology has a dark side. It is never enough to simply charge boldly forward; we also need to stop every once in a while and think about where we are going.

– Greg

 

An earlier version of this article first appeared on Inc.com.

2 Responses
  1. Michael Breeden
    March 5, 2017

    Interesting how you match three technology fields to morality – nuclear weapons, genetics and AI / automation. You do succinctly describe the technologies and mention the economic fallout, but I find a description of morality rather lacking. What is morality? You seem quite smart, informed, concerned and forward looking. Surely you can describe morality. If you can’t, I really doubt the specialists in those technologies are going to be able to. If they can’t, how can you expect them to act morally, especially as the problems they face make Mr. Oppenheimer’s problem look simple? Don’t feel bad if you aren’t clear on what morality means. Even the current crop of philosophers admit to flaws in any model of morality they have. Maybe, just maybe, what we need is not new technical or managerial knowledge, but an understanding of morality. Oh yeah, that’s my next book. You can read its prequel now – Genetics For A New Human Ecology. I just haven’t decided whether to call it Strategy For A New Human Ecology, which would be easy to understand, or its real name – Morality For A New Human Ecology. (After I figure out how to make a multi-threaded build of GhostScript and how to use Box in a useful way.)

  2. March 5, 2017

    I mean morality in the classical, Greek sense. As a basis to guide action.

    Good luck with your new book!

    – Greg
