There Are Things That Machines Can’t Do (And Never Will)
I recently went to an event that featured a panel of experts discussing the impact artificial intelligence will have on society. As the discussion was winding down, the moderator asked what humans could still do that today’s superpowered algorithms cannot. One of the panelists, a renowned neuroscientist, replied, “die.”
“Well, that was morbid,” I thought. It’s also completely untrue. There are lots of things machines will never do. Machines will never strike out at a Little League game, have their hearts broken in a summer romance or see their children born. These things may seem incredibly prosaic, but they’re actually deeply consequential and far-reaching.
As MIT’s Sandy Pentland has put it, “We teach people that everything that matters happens between your ears, when in fact it actually happens between people.” Collaboration, with humans and machines, is becoming a key to competitive advantage and that’s where we need to focus. As I wrote in Forbes a decade ago, the future of technology is always more human.
Borges, The Infinite Monkey Theorem And The Library of Babel
The Argentinian writer Jorge Luis Borges had a fascination with a concept known as the infinite monkey theorem. The idea is that if you had an infinite number of monkeys pecking away at an infinite number of typewriters, they would randomly create the collected works of Shakespeare and every other masterpiece ever written (or that could be written).
Borges took the concept further in his story The Library of Babel, published in 1941, which describes a library that contains books with all potential word combinations in all possible languages. Such a place would encompass all possible knowledge, but would also be completely useless, because the vast majority of books would be gibberish consisting of random strings of symbols.
While the Infinite Monkey Theorem and the Library of Babel were meant to be pure thought experiments, today’s Generative AI services like ChatGPT make the dilemmas they describe very real. Much like the monkeys, we can churn out an almost infinite amount of content at will and, much like in the Library of Babel, much of what’s produced is little more than mindless gibberish.
The more time you spend with these dilemmas, the more you notice two inescapable problems. The first is curation. Yes, we can ask our machines to produce 100 versions of an email, but ultimately we have to choose which version we want. The second is intent. We have to decide what we want to produce and why.
For example, I asked ChatGPT to produce a 300 word biography of me, which it did in seconds. The result was grammatically perfect and completely accurate, but it was not the biography I would write for myself. I could, of course, provide a better description of what I wanted, but once I think all that through I might as well write the whole thing myself.
The Enshittification Of The Internet
Another thing that came up during the panel is Cory Doctorow’s concept of the enshittification of the Internet, which is, in part, driven by the Infinite Monkey phenomenon. When you have so many creators pumping out so much material at minimal cost, most of it isn’t going to be very good and this deepens the curation problem.
The problem is magnified by the profit motive. Because of the curation problem, it’s hard to get noticed, so profit-driven companies pay for the privilege of visibility, such as when sellers pay Amazon for better placement. We’re so constantly getting spammed that it’s harder for us to see the things we’re really interested in.
Generative AI will likely make the problem even worse. As AI generated content floods the web, these large language models are increasingly learning from their own enshittified product, leading to an AI feedback loop and the possibility of model collapse. Once you have monkeys copying off of other monkeys, problems of intent and curation become an intractable hall of mirrors.
We can imagine a dystopian, machine-solipsistic future in which we use AI to respond to emails that others have generated with their own AI systems. We no longer bother curating because, much like the mindless spam that fills up our inboxes today, we don’t even expect those messages to reflect any meaningful human intent. The messages continue to fly back and forth, while everyone ignores them.
Forming Intent Through Dialogue
Once we stop imagining AI to be a super genius in a box and begin to see it as a machine for curating infinite monkeys, something much more valuable begins to emerge—a tool to create dialogues for ourselves. Large language models, by definition, hold a multitude of perspectives. So rather than replacing us, we can use them as sounding boards to help us create for ourselves.
When a new technology emerges, we tend to focus on old use cases, which is why early movies were filmed as if performed on a stage. It took time to see that film allowed us to create realistic sets that made for much more powerful storytelling. In much the same way, people tend to use chatbots much like we use search engines—to input a query and get an answer.
But systems like ChatGPT have the ability to preserve context. So we can start out with a simple query and then, as we get answers back, interrogate those answers and ask for different perspectives from historical figures, varied demographic groups or fictional characters. The possibilities are nearly endless. Rather than using AI as a machine to spit out canned answers, we can use it as a tool to help us explore possibilities.
As the dialogue progresses, we can begin to refine our original intent, go in different directions and ask for feedback. We can use these simulated conversations to enrich our real ones, bouncing around ideas that we can sharpen as we collaborate with and serve other humans, seeking out purpose and meaning that machines cannot provide.
Becoming More Human
For the past century or so, the most reliable path to success has been the ability to retain information and manipulate numbers. Those were the skills that gained entrance into the most prestigious schools and led to a career at a top firm. Yet the reason that information processing has been so highly valued is precisely because humans are so bad at it.
Today’s super-powered algorithms can mine vast stores of information and then express that information in writing, images, even sound and film. So it shouldn’t be surprising that computers are taking over what were long regarded as high-level human tasks. Yet once a task becomes automated, it becomes commoditized and value shifts somewhere else.
The key to winning in the era of AI is not to try to compete with machines, but to become more human, to be a better listener, collaborator and to support other humans as they work to identify and pursue their own intentions and ambitions. In Supercommunicators, author Charles Duhigg explains how the most successful leaders learn to match and respond to others’ mind states.
In a similar vein, many scientists believe that in ancient times religion conferred an evolutionary advantage because the connections and community it built around spiritual life enabled collective action to pursue important projects. Today, Todd McLees has developed a human skills curriculum to help organizations achieve something similar—enable humans to serve humans by collaborating with machines.
The key to succeeding in an artificially intelligent world is not to learn more about machines, but to learn more about ourselves. The future of technology is always more human.
Greg Satell is Co-Founder of ChangeOS, a transformation & change advisory, an international keynote speaker, and bestselling author of Cascades: How to Create a Movement that Drives Transformational Change. His previous effort, Mapping Innovation, was selected as one of the best business books of 2017. You can learn more about Greg on his website, GregSatell.com, follow him on Twitter @DigitalTonto, his YouTube Channel and connect on LinkedIn.
Image created by the infinite monkeys at Microsoft Designer