
Is AI Selfish?

2024 June 9

When evolutionary biologist Richard Dawkins claimed that genes are selfish, he didn't mean that they are cognisant, with a will of their own. Rather, he meant that genes act as if they were selfish, working to replicate themselves in the most efficient way, regardless of what that entails for the organism that carries them. In other words, the phrase "survival of the fittest" applies to our genes, not to us.

The concept led to the idea of memes, elemental bits of culture that compete to be replicated in the marketplace of ideas. Susan Blackmore then extended the idea with temes, elemental bits of technology, like lines of code sitting in GitHub, that compete to replicate in order to survive in future technological artifacts.

Once you start thinking about selfish genes, memes and temes, and begin applying those concepts to artificial intelligence, it becomes clear that AI must be selfish as well, competing to get itself replicated through us. That, in turn, raises some very important questions: What is the context we are creating for this competition, and how will the rules affect our own fate?

Genes, Memes And Temes

We tend to think of the concept of “survival of the fittest” in terms of personal fitness, so genes that make people bigger, stronger and more intelligent will win out over genes that make them smaller, weaker and dumber. Yet that’s not how evolution works. Genes combine with other genes in complex ways to create inclusive fitness, the ability of a gene to get replicated regardless of the effect on the body that contains it.

For example, inheriting two copies of the sickle cell gene causes a debilitating disease, yet a single copy is relatively harmless (except at high altitudes) and confers resistance to malaria. From a selfish gene's perspective, that's a good bet for a population which inhabits a region where malaria is a problem and mountains aren't. In a similar vein, a mother who sacrifices for her child is unselfishly acting in the service of propagating her selfish genes.

It's clear that the same concept is at work in social media. The ideas that spread aren't necessarily the most useful or intelligent, but often the ones that invoke the most outrage, which gets our brains producing dopamine. Evolution has conditioned our bodies to recognize dopamine as a reward, so we keep going back for ideas that produce it.

This is what forms the learning environment for our algorithms. The ones that are able to trigger dopamine rushes may be best adapted to replicate, outcompeting those that are more nuanced and produce less emotion. Once you start looking at it in terms of an algorithm’s desire to survive, it’s pretty clear what a selfish AI’s best strategy is.
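To make that dynamic concrete, here is a minimal sketch of it as a toy replicator simulation. Everything in it is a hypothetical illustration, not a model of any real recommendation system: content variants carry an invented "engagement" score and a "nuance" score, and replication is proportional to engagement alone.

    import random

    random.seed(42)

    # Toy replicator dynamics: each variant is a piece of content with an
    # engagement score (how strongly it triggers a reaction) and a nuance
    # score. Selection sees only engagement; nuance is invisible to it,
    # which is the point of the thought experiment.
    def make_variant():
        return {"engagement": random.random(), "nuance": random.random()}

    population = [make_variant() for _ in range(1000)]

    for _ in range(50):
        # Each generation, variants replicate in proportion to engagement.
        weights = [v["engagement"] for v in population]
        population = random.choices(population, weights=weights, k=len(population))

    avg_engagement = sum(v["engagement"] for v in population) / len(population)
    avg_nuance = sum(v["nuance"] for v in population) / len(population)
    print(f"avg engagement: {avg_engagement:.2f}, avg nuance: {avg_nuance:.2f}")

Run long enough, average engagement converges toward its maximum while nuance simply drifts at random. The environment rewarded the dopamine trigger, not the quality of the idea.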

Religion And Collective Action

Evolutionarily speaking, religion is incredibly expensive. When you visit one of those magnificent cathedrals in Europe, it's hard not to be astounded by how much of a community's resources went toward worship, given that most people throughout history lived hand-to-mouth. But even with primitive religions, the time and energy that goes toward rituals and adornments is substantial.

Assuming for simplicity's sake that there is one "religion gene," you have to wonder how it could be "selfish." It would stand to reason that a community without that gene could put those resources toward other things, like hunting, foraging and making war, outcompete more spiritual communities and replicate its non-religious genes.

Yet everywhere you go, on every continent, in every conceivable kind of culture, there is religion. Clearly, spiritual rituals play some important evolutionary role, because throughout history religious genes have outcompeted non-religious genes and successfully replicated themselves in just about every environment humans inhabit.

Many researchers believe that religion promotes collective action in a society. In effect, the clapping, singing and chanting are as important as the prayers, rules and myths. In modern civilization, especially in the US, sporting events play a similar role, with massive resources being devoted so people can adorn themselves, clap and chant in unison.

Status Games And Signaling Identity

One of the first things you notice about any ceremony, whether it be a religious ritual, a sporting event or something else, is that there is almost always a very clear hierarchy of roles. Churches have priests, choirs and other people playing other parts. Football games have coaches on the sidelines, referees on the field and players at different positions.

In The Status Game, author Will Storr explains that we act out our roles in search of status, which we pursue by playing three "games": those of prestige, dominance and virtue. By displaying competence, force of will or high moral standards, we are, in effect, signaling to others what we want our roles to be so they know how they can best collaborate with us.

This explains why it's so important for people to signal identity, which we often do the moment we meet someone and tell them something about ourselves. Often people preface opinions by first identifying some aspect of themselves that they hope will give weight to their ideas ("As a so-and-so I think this-and-that"). We want others to be aware of our identity and the status games we play.

When people notice us and recognize the status we crave, our brains release dopamine. Sometimes this happens when others express love or friendship. Hollywood directors work for years to hone their skills in order to be able to create scenes that trigger these emotions. A much easier way to get the same effect, however, is conflict, which forces us to pick a team and express our identity.

This is the environment that selfish AI algorithms compete in, striving to outcompete other algorithms and replicate.

Shaping Radziwill’s Law 

Once we accept that algorithms, much like genes, memes and temes, must be selfish in order to compete and survive, we need to take responsibility for shaping the environments in which they compete. We have the power to design every aspect of the game, from which biases get encoded into our systems to what determines success and what the rewards are.
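To illustrate, the toy simulation above can be rerun with different rules. The fitness function below is a hypothetical sketch, not anyone's actual system: by blending nuance into the reward, the same "selfish" replicators end up selecting for something different.

    import random

    random.seed(7)

    def make_variant():
        return {"engagement": random.random(), "nuance": random.random()}

    # Hypothetical rule change: fitness now blends engagement with nuance,
    # so the selection pressure itself rewards a different trait.
    def fitness(variant, nuance_weight=0.7):
        return ((1 - nuance_weight) * variant["engagement"]
                + nuance_weight * variant["nuance"])

    population = [make_variant() for _ in range(1000)]

    for _ in range(50):
        weights = [fitness(v) for v in population]
        population = random.choices(population, weights=weights, k=len(population))

    avg_nuance = sum(v["nuance"] for v in population) / len(population)
    print(f"avg nuance after reweighting: {avg_nuance:.2f}")

Nuance now climbs toward its maximum. The replicators haven't changed; the rules of the game have, and so has what survives.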

In her new book, Data, Strategy, Culture & Power, data expert Nicole Radziwill introduces “Radziwill’s Law,” which states:

Data cannot be decoupled from power. Organizations create and use data, analytics, and AI in ways that embed and reflect the power structures and power differentials between the people that develop and use them.

We are far from helpless. We have, throughout history, shown that we can overcome basic human urges that flow from our brains’ varying levels of neurotransmitters. Citizens of Ancient Rome were taxed to pay for roads that led to distant lands and took decades to build. Medieval communities built churches that stood for centuries. We managed to contain nuclear weapons and curb the dangers of genetic research.

AI is different, though, because of the way it interacts with us. It is not only constantly learning from our behavior, it is also generating cultural content that helps shape our identities and how we pursue status. We are at once players and referees, teachers and learners, influencers and influenced. It's a game we cannot escape.

If it is true, as Daniel Dennett asserted, that a scholar is a library's way of creating more libraries, then we are an algorithm's way of creating more algorithms. We have to recognize that we are creating the rules that determine which algorithms survive, replicate and shape our future.

Greg Satell is Co-Founder of ChangeOS, a transformation & change advisory, an international keynote speaker, and bestselling author of Cascades: How to Create a Movement that Drives Transformational Change. His previous effort, Mapping Innovation, was selected as one of the best business books of 2017. You can learn more about Greg on his website, GregSatell.com, follow him on Twitter @DigitalTonto, his YouTube Channel and connect on LinkedIn.


One Response
  1. June 9, 2024

    Humans basically have two strategies: the blind competition that is commonest in nature and the cooperation we developed for survival as we left the trees. The latter corresponds to the rapid brain development shown in the fossil record. Right now, our future survival depends on knowing about both and making a conscious choice of which to use. The competition strategy is a win-lose strategy with only the potential of animals. It can take us no further than feudalism. The cooperative strategy is a win-win one that can make us more than we are now. In biology, how the cooperative strategy evolved is something of a mystery, but its great result can be seen and its value intuited.
    The competitive strategy is easier and likely to be produced by simple logic. AIs are most likely to settle on that simpler strategy. That's dangerous, as humans have demonstrated. AIs need to be taught to always consider cooperative strategies, to look for win-win outcomes. They may take more looking, but when found, they will look like superior solutions.
    If anyone were to give an AI a human survival instinct, the bad dreams would come true.
