
How Artificial Intelligence Is Forcing Us To Answer Some Very Human Questions

November 3, 2019
by Greg Satell

Chris Dixon, who invested early in companies ranging from Warby Parker to Kickstarter, once wrote that the next big thing always starts out looking like a toy. That’s certainly true of artificial intelligence, which started out playing games like chess and Go, and competing against humans on the game show Jeopardy!

Yet today, AI has become so pervasive we often don’t even recognize it anymore. Besides enabling us to speak to our phones and get answers back, intelligent algorithms are often working in the background, providing things like predictive maintenance for machinery and automating basic software tasks.

As the technology becomes more powerful, it’s also forcing us to ask some uncomfortable questions that were once more in the realm of science fiction or late-night dorm room discussions. When machines start doing things traditionally considered to be uniquely human, we need to reevaluate what it means to be human and what it means to be a machine.

What Is Original And Creative?

There is an old literary concept called the Infinite Monkey Theorem. The basic idea is that if you had an infinite number of monkeys pecking away at an infinite number of keyboards, they would, in time, produce the complete works of Shakespeare or Tolstoy or any other literary masterpiece.
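To get a feel for the odds involved, here is a minimal Python sketch of the idea (purely illustrative; the 27-character alphabet and the sample phrases are arbitrary assumptions): a three-letter word turns up quickly, but the expected number of random attempts explodes exponentially with the length of the phrase.

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "  # 27 symbols: a toy monkey keyboard

def attempts_until(target: str) -> int:
    """Type random strings until `target` appears; return the attempt count."""
    attempt = 0
    while True:
        attempt += 1
        guess = "".join(random.choice(ALPHABET) for _ in range(len(target)))
        if guess == target:
            return attempt

# A three-letter word is findable in seconds (~20,000 tries on average)...
print("'cat' found after", attempts_until("cat"), "attempts")

# ...but expected attempts grow as 27 ** len(phrase):
for phrase in ["cat", "to be", "to be or not to be"]:
    print(f"{phrase!r}: ~{len(ALPHABET) ** len(phrase):.2e} expected attempts")
```

Run it and the gap between simulating monkeys and producing Tolstoy becomes obvious: brute randomness alone gets you almost nowhere, which is why the systems described below guide and curate as much as they generate.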

Today, our technology is powerful enough to simulate infinite monkeys and produce something that looks a whole lot like original work. Music scholar and composer David Cope has created algorithms that produce original works of music so good that even experts can’t tell the difference. Companies like Narrative Science produce coherent documents from raw data in much the same way.

So there’s an interesting philosophical discussion to be had about what qualifies as true creation and what’s merely curation. If an algorithm produces War and Peace randomly, does it retain the same meaning? Or is the intent of the author a crucial component of what creativity is about? Reasonable people can disagree.

However, as AI technology becomes more common and pervasive, some very practical issues are arising. For example, Amazon’s Audible unit has created a new captions feature for audiobooks. Publishers are suing, saying it’s a violation of copyright, but Amazon claims that because the captions are created with artificial intelligence, they are essentially a new work.

When a machine creates, does that qualify as original, creative intent? Under what circumstances can a work be considered new and original? We are going to have to decide.

Bias And Transparency

We generally accept that humans have biases. In fact, Wikipedia lists over 100 documented biases that affect our judgments. Marketers and salespeople try to exploit these biases to influence our decisions. At the same time, professional training is supposed to mitigate them. To make good decisions, we need to conquer our tendency toward bias.

Yet however much we strive to minimize bias, we cannot eliminate it, which is why transparency is so crucial for any system to work. When a CEO is hired to run a corporation, for example, he or she can’t just make decisions willy-nilly, but is held accountable to a board of directors who represent shareholders. Records are kept and audited to ensure transparency.

Machines also have biases which are just as pervasive and difficult to root out. Amazon recently had to scrap an AI system that analyzed resumes because it was biased against female candidates. Google’s algorithm designed to detect hate speech was found to be racially biased. If two of the most sophisticated firms on the planet are unable to eliminate bias, what hope is there for the rest of us?

So we need to start asking the same questions of machine-based decisions as we do of human ones. What information was used to make a decision? On what basis was a judgment made? How much oversight should be required, and by whom? We all worry about who and what influence our children; we need to ask the same questions about our algorithms.
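As a concrete, if simplified, illustration of what such oversight can look like in practice, here is a minimal Python sketch of a selection-rate audit. The records and group labels are made up for illustration, and the 0.80 threshold echoes the “four-fifths” rule of thumb from US employment guidance:

```python
from collections import defaultdict

# Hypothetical screening decisions: (group, was_selected). In a real audit
# these would be the algorithm's actual outputs, not made-up records.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected  # True counts as 1

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print("Selection rates:", rates)               # {'A': 0.75, 'B': 0.25}
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33, well below the 0.80 rule of thumb
```

An audit this simple won’t catch subtler problems like proxy variables, but it shows that “on what basis was a judgment made?” can be made into an operational, checkable question.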

The Problem Of Moral Agency

For centuries, philosophers have debated what constitutes a moral agent, meaning to what extent someone is able to make, and be held responsible for, moral judgments. For example, we generally do not consider those who are insane to be moral agents. Minors are also not held fully responsible for their actions.

Yet sometimes the issue of moral agency isn’t so clear. Consider the moral dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks, about to run over five people. The only way to save them is to pull a lever that switches the trolley to a different set of tracks, but if you do, one person standing on that track will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.
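To see what “explicitly encoded” means, consider this deliberately oversimplified Python sketch. It is a thought experiment, not how autonomous-vehicle software actually works, and every name in it is hypothetical; the point is that whatever rule the function applies, a human wrote it, and it is a moral judgment frozen into code.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    passengers_at_risk: int
    pedestrians_at_risk: int

def choose(outcomes: list[Outcome]) -> Outcome:
    # One possible (and contestable) policy: minimize the total number of
    # people at risk, breaking ties by protecting pedestrians first.
    return min(outcomes, key=lambda o: (
        o.passengers_at_risk + o.pedestrians_at_risk,
        o.pedestrians_at_risk,
    ))

swerve = Outcome("swerve", passengers_at_risk=1, pedestrians_at_risk=0)
stay = Outcome("stay", passengers_at_risk=0, pedestrians_at_risk=1)
print(choose([swerve, stay]).action)  # "swerve" - the tie-break decided for us
```

Change one line of that tie-break and the car makes the opposite choice. Who gets to write that line is exactly the question of moral agency raised below.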

On a more basic level, we need to ask who is responsible for a decision an algorithm makes, especially since AI systems are increasingly capable of making judgments humans can’t understand. Who is culpable for an algorithmically driven decision gone bad? By what standard should they be evaluated?

Working Towards Human-Machine Coevolution

Before the industrial revolution, most people earned their living through physical labor. Much like today, tradesmen saw mechanization as a threat — and indeed it was. There’s not much work for blacksmiths or loom weavers these days. What wasn’t clear at the time was that industrialization would create a knowledge economy and demand for higher-paid cognitive work.

Today, we’re going through a similar shift, but now machines are taking over cognitive tasks. Just as the industrial revolution devalued certain skills and increased the value of others, the age of thinking machines is catalyzing a shift from cognitive skills to social skills. The future will be driven by humans collaborating with other humans to design work for machines that creates value for other humans.

Technology is, as Marshall McLuhan pointed out long ago, an extension of man. We are constantly coevolving with our creations. Value never really disappears; it just shifts to another place. So when we use technology to automate a particular task, humans must find a way to create value elsewhere, which creates an opportunity to develop new technologies.

This is how humans and machines coevolve. The dilemma that confronts us now is that when machines replace tasks once thought of as innately human, we must redefine ourselves, and that raises thorny questions about our relationship to the moral universe. When men become gods, the only thing that remains to conquer is ourselves.

– Greg

Image: Pixabay

5 Responses
  1. November 4, 2019

    Oh, that’s right up my alley… Did you know that in the 1930s a guy wrote a description of morality that would apply to machines? It’s quite different from human morality… They do reproduce quite differently, and at its root morality is about reproduction even more than individual survival.
    “This is how humans and machines coevolve.” My buddy loves to point out that machines evolve infinitely faster than humans. He proposes that humans quietly replace ourselves with our “machine children”. He’s serious.
    The best thing about studying morality and moral instincts is that they are organic, they were created by evolution. Logic and reason may not apply to them any more than other outcomes from evolution.
    Hey! Another good question is what humans are going to do if it gets to the point of George Jetson only working 2 hours a week. Whoops, we’re already getting dangerously close to that if you look at modern occupational reality.
    Think of how difficult it will be to make a new world for humans. In the past, nature has always done that. Now we can’t rely on nature. We have to do it ourselves. It’s going to take some real brilliance and devotion to do that.
    I know. Some really really smart cookie should spend a few decades figuring out a Strategy For A New Human Ecology 🙂

  2. November 6, 2019

    So maybe there is a new type of person who mediates between regular people and technologies?

  3. November 7, 2019

    It’s an interesting point and true, I think.

  4. November 9, 2019

    Hi Greg, you make many great points in this article. I hope more people read it.
    As you may know, I invested 24 years working in a great, albeit engineering-dominated, culture at HP. Initially, what appeared to be a foreign, acronym-laden language and a total focus on logic inspired me to examine where the gaps lie between technology and human relevance across the stakeholder ecosystem. One well-respected engineer at HP said, “Our customers are just like us,” and as we were briefing our agency, he went as far as to say, “Here is what the ad should look like: a large picture of the product, its name in the headline, a specs list and a benefits list. You can play with the subhead.”

    Talking about biases, anyone familiar with how agencies work knows you don’t do that. But we saw this engineer as a valued partner just sharing his beliefs. So we ran his ad his way, did our ad our way, and evaluated which got the best response.
    His way assumed all customers were engineers, but most failed to get the value context, so response was low. Our ad focused on the value, simplified by likening usability to riding a bike. We scored high response rates.

    Because we respected this engineer, as we would anyone with different thinking, he soon became an advocate. Attention to others, reasoning, trust, relevance and collaboration are increasingly vital as we face the need to transform around greater attention to stakeholder values, versus the bottom-line short-termism that dominated the last century.

    My points to add credence & value to your article re: AI & Humanity are as follows:
    1. Way too many jobs are defined as repetitive work with time pressures. A good AI fit.
    2. Digital device addiction is causing an erosion of soft skills, reasoning, creativity, relationships and genuine attention to others.
    3. This is happening at a time when our enabled humanity & shared values define the path to earned significance, which then translates to far greater success for all.
    4. Left-brain-dominated cultures, while great at product and regulatory disciplines, also inadvertently create closed, internally focused cultures.

    So, to the suggestion made by Roman, my answer, long proven by my work for HP and others, is to make sure you have more right-brainers and conscious designers focused on how best to create trust, relevance and objectivity, as these are vital to earning the highest stakeholder regard.

    I am excited to see all that we learn as designers, now one of the most relevant additions for keeping organizations genuine rather than just making them look good on a false foundation.

    Keep up the great work, Greg; a better world is ahead.

  5. November 10, 2019

    Thanks Bill! I always look forward to hearing your views.

    – Greg
