
Can We Teach Algorithms Right From Wrong?

2016 November 20
by Greg Satell

In the Nicomachean Ethics, Aristotle observes that “all knowledge and every pursuit aims at some good,” but then asks, “what then do we mean by the good?” That, in essence, encapsulates the ethical dilemma. We all agree that we should be good and just, but it’s much harder to decide just what that entails.

Since Aristotle’s time, the issues he raised have been continually discussed and debated. From the works of great philosophers like Kant, Bentham and Rawls, to modern day cocktail parties and late night dorm room bull sessions, ethical questions are endlessly mulled over and argued about, but never come to a fully satisfying conclusion.

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining new importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? This is no longer a purely theoretical question.

Designing A Learning Environment

Every parent worries about what influences their children are exposed to. What TV shows are they watching? What video games are they playing? Are they hanging out with the wrong crowd? We try not to overly shelter our kids, because we want them to learn about the world, but don’t want to expose them to too much before they have the maturity to process it.

In artificial intelligence, these influences are called a “machine learning corpus.” For example, if you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats. Eventually, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.
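
To make the idea of a learning corpus a bit more concrete, here is a minimal sketch of supervised learning in Python (assuming scikit-learn and NumPy are available; the “images” are synthetic feature vectors standing in for real photographs). The point it illustrates is that the model is only as good, and only as biased, as the labeled examples it is shown.

```python
# A minimal sketch of the "learning corpus" idea: a classifier improves only as a
# function of the labeled examples it is exposed to. The feature vectors below are
# synthetic stand-ins for real cat/dog photographs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical corpus: 1,000 examples, each a 64-dimensional feature vector,
# labeled 0 = dog, 1 = cat (here derived from a toy, learnable pattern).
X = rng.normal(size=(1000, 64))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The classifier "figures out how to tell the difference" only from what it has seen.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```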

However, the process can go horribly awry, as in the case of Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform. In under a day, Tay went from friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Francesca Rossi, a researcher at IBM, points out that we often encode principles regarding influences into societal norms, such as at what age a child can watch an R-rated movie or whether they should learn evolution in school. “We need to decide whether, and to what extent, the legal principles that we use to regulate humans can be used for machines,” she told me.

However, in some cases, algorithms can alert us to bias in our society that we might not have been aware of, such as when we Google “Grandma” and see only white faces. “There is a great potential for machines to alert us to bias,” Rossi notes. “We need to not only train our algorithms, but also be open to the possibility that they can teach us about ourselves.”

Unravelling Ethical Dilemmas

One thought experiment that has puzzled ethicists for decades is the trolley problem. Imagine you see a trolley barreling down the tracks, about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing there will be killed. What should you do?

Ethical systems based on moral principles, such as Kant’s Categorical Imperative (“Act only according to that maxim whereby you can, at the same time, will that it should become a universal law”) or Asimov’s first law (“A robot may not injure a human being or, through inaction, allow a human being to come to harm”), are thoroughly unhelpful here.

An alternative would be to adopt the utilitarian principle and simply do what results in the most good or the least harm. Then it would be clear that you should kill the one person to save the five. However, the idea of killing somebody intentionally is troublesome, to say the least. While we do apply the principle in some limited cases, such as a Secret Service officer’s duty to protect the President, those are rare exceptions.

The rise of artificial intelligence is forcing us to take abstract ethical dilemmas much more seriously, because we will need to encode moral principles into software concretely. Should a self-driving car risk killing its passenger to save a pedestrian? To what extent should a drone take into account the risk of collateral damage when killing a terrorist? Decisions will have to be made.

These are tough questions, but IBM’s Rossi also points out that machines may be able to help us with them. Aristotle’s teachings, often referred to as virtue ethics, emphasize that we need to learn the meaning of ethical virtues, such as wisdom, justice and prudence. So it is possible that a powerful machine learning system will provide us with new insights.

Cultural Norms vs. Moral Values

Another issue that we will have to contend with is that we will not only have to decide what ethical principles to encode in artificial intelligences, but how they are coded. As noted above, for most people, “Thou shalt not kill” is a strict principle, but for a secret service agent it’s more like a preference and greatly affected by context.

There is often much confusion about what is truly a moral principle and what is merely a cultural norm. In many cases, as with LGBT rights, societal judgments with respect to morality change over time. In others, such as teaching creationism in schools or allowing the sale of alcohol, we find it reasonable to let different communities make their own choices.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.
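
As a rough sketch of what that distinction might look like in code (the names and structure here are hypothetical illustrations, not any vendor’s actual approach), a strict principle can be modeled as a hard constraint that vetoes an action outright, while a cultural norm enters as a weighted preference that only ranks the remaining options; which rules land in which bucket could then differ by jurisdiction.

```python
# Illustrative only: a hypothetical way to separate strict moral principles
# (hard constraints that veto an action) from cultural norms (weighted,
# context-dependent preferences), with the split varying by jurisdiction.
from dataclasses import dataclass, field

@dataclass
class EthicsPolicy:
    hard_constraints: list = field(default_factory=list)  # predicates that veto an action
    preferences: dict = field(default_factory=dict)        # {scoring function: weight}

    def permits(self, action, context):
        # A strict principle rules an action out entirely, regardless of context.
        return not any(rule(action, context) for rule in self.hard_constraints)

    def score(self, action, context):
        # Preferences only rank the actions that survive the hard constraints.
        if not self.permits(action, context):
            return float("-inf")
        return sum(weight * fn(action, context) for fn, weight in self.preferences.items())

# Hypothetical rules; real systems would need far richer representations.
harms_a_person = lambda action, ctx: action.get("expected_injuries", 0) > 0
low_cost = lambda action, ctx: -action.get("cost", 0.0)

# Jurisdiction A treats harm as an absolute veto; jurisdiction B merely weights it heavily.
policy_a = EthicsPolicy(hard_constraints=[harms_a_person], preferences={low_cost: 1.0})
policy_b = EthicsPolicy(preferences={low_cost: 1.0,
                                     (lambda a, c: -a.get("expected_injuries", 0)): 10.0})
```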

The issue becomes especially thorny when algorithms have to make decisions according to conflicting professional norms, such as in medical care. How much should cost be taken into account in medical decisions? Should insurance companies have a say in how the algorithms are coded? It is likely that different communities will make different choices.

This is not a completely new problem. For example, firms operating in the U.S. need to abide by GAAP accounting standards, which rely on strict rules, while those operating in Europe follow IFRS accounting standards, which are driven by broad principles. We will likely end up with a similar situation with regard to many ethical principles in artificial intelligences.

Setting A Higher Standard

In speaking to AI experts, it became clear to me that we will need to set higher standards for artificial intelligences than we do for humans. We do not, as a matter of course, expect people to supply a list of influences and account for their logic for every decision that they make, unless something goes horribly wrong. But we will require such transparency from machines.

“With another human, we often assume that they have similar common sense reasoning capabilities and ethical standards that we have. That’s not true of machines, so we need to hold them to a higher standard,” IBM’s Rossi says. Josh Sutton, Global Head, Data & Artificial Intelligence at Publicis.Sapient, agrees and argues that both the logical trail and the learning corpus that lead to machine decisions need to be made available for examination.

However, Sutton also sees cases where we might opt for less transparency. For example, we may feel more comfortable with algorithms that make use of our behavioral and geolocation data if human access is restricted. It’s much easier to encode strict parameters into a machine than it is to do so in a human.

Clearly, these issues need further thought and discussion. Google, IBM, Microsoft, Amazon and Facebook have recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government and industry to advance understanding and promote best practices. Yet that is merely a starting point.

As pervasive as artificial intelligence is set to become in the near future, the responsibility rests with society as a whole. Put simply, we need to treat the standards by which artificial intelligences will operate just as seriously as those that govern our legal systems and how we educate our children.

It is a responsibility that we cannot shirk.

– Greg

A previous version of this article first appeared in Harvard Business Review.

2 Responses
  1. Dwight
    November 21, 2016

    Greg,
    Great article, and one that highlights a great challenge. Having a small background in ethics in college, I appreciate your view on the complexity of the topic. Just simple reading on the topic, I think, expands one’s thinking. Maybe introduce this as part of a curriculum on critical thinking at younger ages? One interesting question is whether one ethical framework fits all scenarios in the AI world.

    An interesting current ethical question is related to driver-less cars, and similar to the trolley problem. If a driver-less car detects (in fractions of a second) that it will have a multi-car accident, does it take action to favor its own passenger(s) or the individuals in the other cars? The problem may be coming sooner than some expect.

  2. November 22, 2016

    Great points. Thanks Dwight.

    – Greg
