
How Numbers Lie

2015 June 14

It’s easy to laugh off an academic squabble.  When overeducated combatants square off in an arena that most people don’t even know exists, few take notice.  Yet some of these disputes reverberate outside the academic world, and I suspect that Paul Romer’s assault on mathiness, ably summarized by Justin Fox at Bloomberg View, will be one of them.

The issue at hand is the tendency of economists to cloak ideology in obscure equations to give their views a false appearance of rigor.  Well, you might say, that’s what overeducated eggheads do, but even seemingly practical-minded business people have their own version of “mathiness.”

When managers say they are data driven and ROI focused they are usually more intent on professing a belief than delivering results. They are, essentially, accidental theorists, putting their faith in an abstract idea rather than engaging in any true analysis of cause and effect. Despite what many will tell you, numbers can lie and only fools follow them blindly.

The Engineering Of Efficiency

Frederick Winslow Taylor must have been a strange sight at the Midvale Steel Works. Unlike most factory foremen, he didn’t bark at his men to work harder and faster, but stood by with a stopwatch, pen and ledger, observing and timing their movements.  His aim was to find the one, best way to perform every task.

We now know Taylor as the father of scientific management, which later spawned the best practices and Six Sigma movements.  In the 1960s and 70s, Wall Street got into the act, developing the efficient markets hypothesis, which led to the capital asset pricing model (CAPM) to guide capital investments and the Black-Scholes model to mitigate risk.
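For readers who haven’t seen it, the CAPM is a useful illustration of how much gets packed into one tidy formula.  The standard textbook statement is shown below; it isn’t quoted from anything in this post:

\[
E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right), \qquad
\beta_i = \frac{\mathrm{Cov}(R_i, R_m)}{\mathrm{Var}(R_m)}
\]

Here \(R_f\) is the risk-free rate, \(R_m\) the return of the market portfolio, and \(\beta_i\) the asset’s sensitivity to market swings.  The elegance conceals what is baked in: rational investors, frictionless markets and well-behaved return distributions, none of which is guaranteed.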

The goal of these models was to engineer optimized solutions to common business problems.  If, as Taylor preached, there was a “one, best way” of doing things, then managers using math could hone their industrial machines by identifying best practices and deploying them throughout their organizations.

Alas, this was as much “mathiness” as it was math.  Underlying all the numbers and complicated formulas were human assumptions.  These were not, in fact, “scientific,” but mere guesses that made the math simpler and more manageable.  Unfortunately, they were also very, very wrong.

The Mathematics of “Anything Can Happen”

The idea that results can be “engineered” is an attractive one.  It suggests that by knowing inputs we can predict, with a remarkable degree of accuracy, what outputs will be.  If true, then performance is mostly a matter of getting the data right.  With better measurement and analysis, we should be able to get better results.

Yet Benoit Mandelbrot cast doubt on this neat little story.  Although much of the time these models worked and past results did indicate future performance, sometimes they were far off the mark.  He argued that economic analysis was too dependent on “Joseph effects,” which supported continuity and neat models, but ignored “Noah effects” which created discontinuity and blew those same models to bits.

In a sense, he wasn’t telling anybody anything they didn’t know.  Statisticians had long been aware that no model is perfect, but they dismissed stray data points as “outliers” that could be safely ignored.  Yet Mandelbrot pointed out that the efficiency engineering models were failing: outliers like market crashes happened far more often than the models predicted.
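The gap Mandelbrot was pointing at is easy to see with a quick simulation.  The sketch below is my own illustration, not anything from the post; the Student-t distribution, the sample size and the five-sigma cutoff are arbitrary choices, but they show how differently thin-tailed and fat-tailed worlds behave:

```python
# Rough illustration: how often do "outliers" appear under a thin-tailed
# Gaussian model versus a fat-tailed alternative?  (Assumptions: Student-t
# with 3 degrees of freedom as the fat-tailed stand-in, one million draws,
# 5 standard deviations as the outlier threshold.)
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
threshold = 5  # standard deviations

normal = rng.standard_normal(n)            # the world the models assumed
fat_tailed = rng.standard_t(df=3, size=n)  # a crude stand-in for real markets
fat_tailed /= fat_tailed.std()             # rescale to unit variance for a fair comparison

print("5-sigma events under the Gaussian:", int(np.sum(np.abs(normal) > threshold)))
print("5-sigma events with fat tails:    ", int(np.sum(np.abs(fat_tailed) > threshold)))
# The Gaussian expects a 5-sigma move less than once in a million draws;
# the fat-tailed series produces them by the thousands in the same sample.
```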

So while everyone else was preaching about the wonders of “scientific management,” Mandelbrot was arguing for the mathematics of “anything can happen.”  In his mind, it was the outliers—market crashes, innovations like electric cars and iPhones, world wars and the like—that determined the course of history.

The System Crashes

For most of his career, Mandelbrot was seen as an iconoclast to be listened to and then ignored.  Paul Cootner, one of the pioneers of financial engineering, wrote that Mandelbrot forced his colleagues “to face up in a substantive way to those uncomfortable empirical observations that there is little doubt most of us have had to sweep under the carpet until now.”

Then he added, “but surely before consigning centuries of work to the ash pile, we should like to have some assurance that all of our work is truly useless.”  The message was clear: the train had left the station.  The dream of a clockwork universe was far too alluring—and there was far too much money to be made—to stop it.

The financial crisis of 2008, just a few years before his death, largely redeemed Mandelbrot in financial circles.  The risk models of the financial engineers failed us all.  Unforeseen defaults in mortgage markets cascaded through the system and reverberated throughout the entire economy.  Risk wasn’t being managed.  In fact, it was being stepped up.

Unfortunately, the lessons remain surprisingly unlearned.  Today’s managers, driven by data and focused on ROI, continue to believe that, with just a little bit more effort and precision, they can keep Mandelbrot’s “Noah effects” at bay and consistently engineer results.

Robustness And Resilience

As General Stanley McChrystal describes in his new book, Team of Teams, when he first took over Special Forces in Iraq, he presided over a magnificently engineered military machine. No force in the world could match their efficiency, expertise and effectiveness.  Yet, although they won every battle, they were losing the war.

The problem, he now explains, is that although his force had robust capabilities—they could perform any task they were given—they failed to be resilient in the face of unforeseen circumstances.  It wasn’t that they weren’t doing their jobs right, but that they weren’t doing the right jobs—and that was the Achilles’ heel of his elite force.

As McChrystal puts it, “In complex environments, resilience often spells success, while even the most brilliantly engineered fixed solutions are often insufficient or counterproductive.” Or, in business terms, they were performing to plan, but the plan itself was flawed.  It was based on assumptions that turned out not to be true.

And that’s the problem with mathiness.  For almost any endeavor, there is a simple model that should, in principle, lead to good results.  However, every model includes assumptions and, unless we understand and account for those assumptions, the model will eventually blow up in our faces.  Obscuring that reality with Greek letters and abstract symbols only compounds the problem.

So while mathiness conveys a certain authority, and the idea of “scientifically engineered” solutions sounds attractive, we should remember that science isn’t about certitude, but skepticism.  There is never a magic formula that can solve all our problems.  A leader’s job is to deal with uncertainty, not ignore it.

– Greg

6 Responses
  1. June 14, 2015

    Economics is the dismal science for a reason, and at times the science of statistics is tortured to make results fit a model even when they do not want to cooperate. Unlike atomic particles, which in large numbers may appear as a wave, humans are individuals of many different kinds who can act. Unlike particles, their actions may not arise from sense and may be influenced by many things, including culture, environment and more.
    It is not that math does not inform; it is blindly following it off the cliff, or torturing it to make a point, that leads to trouble.
    As always, the future comes in ways both predicted and unpredictable and we must adjust. Those open-minded and flexible enough will be the first to ride the waves.

  2. June 14, 2015

    Great points. Interestingly, many of the models used derive from Einstein’s statistical analysis of particles in Brownian motion. They don’t always translate as well to real life.

    Thanks Robert.

    – Greg

  3. Glenn Stehle permalink
    June 15, 2015

    What a great post.

    It is germane to a conversation I am currently having with a correspondent about a recent article published in the “MIT Technology Review”:

    Network Theory Reveals The Hidden Link Between Trade And Military Alliances That Leads to Conflict-Free Stability

    To wit:

    Here’s what Wikipedia has to say about the Santa Fe Institute in its entry on systems theory:

    Complex adaptive systems are special cases of complex systems. They are complex in that they are diverse and composed of multiple, interconnected elements; they are adaptive in that they have the capacity to change and learn from experience. The term complex adaptive system was coined at the interdisciplinary Santa Fe Institute (SFI), by John H. Holland, Murray Gell-Mann and others.

    Not all systems theorists, however, are so sanguine that complex systems, and especially our current global capitalist system, can be or are “adaptive.”

    For instance, in this interview, Benoit Mandelbrot, who was one of the pioneers of systems analysis and was also associated with the SFI, asserts that:

    Tools have been developed which assume that changes are always very small. If one of them comes nothing bad happens. If several of them come together, very bad things can happen. And the theory does not take account of that. And the theory doesn’t take account of very large and sudden changes in anything. The theory thinks that things move slowly, gradually, and can be corrected as they change, where in fact they may change extremely brutally.

    Nassim Nicholas Taleb then elaborates on Mandelbrot’s point:

    Of all the books you read on globalization, they talk about efficiency and all that stuff, they don’t get the point. The network effect of that globalization, OK, means that a shock in the system can have much larger consequences.

    Another pioneer of systems analysis, Immanuel Wallerstein, says “our existing historical system is in the process of dying,” and that “there is a fierce struggle over what kind of new historical system will succeed it.”

    This notion that our current global system is in crisis or is dying is of course a far cry from the sublime conclusions reached by the MIT researchers in the SFI link you sent me, who claimed that:

    Between 1820 and 1959, there were 10 times as many wars per year on average between each possible pair of countries than between 1960 and 2000 (see diagram above). So what has changed since the 1950s?

    Jackson and Nei argue that it is the formation of trade links between countries that has created the stability that has prevented wars.

    Jackson and Nei offer this graph to illustrate the fact that wars between states have diminished significantly in the post-WWII era:

    “Graph from MIT Technology Review”

    There is, however, an alternate explanation as to why there have been so few interstate wars in the post-WWII era. Here’s how Jonathan Schell puts it in The Unconquerable World:

    All the new [technological and scientific] inventions were used to the full in the Second World War, in which an estimated seventy million human beings were killed. But it was the last time. Even as the war was being fought, an instrument that would make such wars forever impossible was being prepared in the desert of New Mexico. Never has a single technical invention had a more sudden or profound effect on an entrenched human institution [the “total war system” as Schell calls it] than nuclear weapons have had on war…. The bomb revealed that total war was not an everlasting but a historical phenomenon. It had gone the way of the tyrannosaurus rex and the saber-toothed tiger, a casualty not of natural but scientific evolution, whose new powers, as always, the war system could not refuse. Its day was done.

    What the MIT researchers Jackson and Nei give us is what William I. Robinson, another renowned global systems theorist, would call an “uncritical” global systems analysis. Here’s how Robinson explains the difference between their “uncritical” global systems analysis and a “critical,” or “subversive,” global systems analysis:

    A critical global study (CGS) must take a global perspective, in that social arrangements in the twenty-first century can only be understood in the context of global-level structures and processes, that is to say, in the context of globalization. This is the ‘‘think globally’’ part of the oft-cited aphorism ‘‘think globally, act locally.’’ The perceived problematics of the local and of the nation-state must be located within a broader web of interconnected histories that in the current era are converging in new ways. Any critical studies in the twenty-first century must be, of necessity, also a globalization studies.

    But global-level thinking is a necessary but not sufficient condition for a critical understanding of the world. Transnational corporate and political elites certainly have a global perspective. Global thinking is not necessarily critical and is just as necessary for the maintenance of global capitalism as critical global-level thinking is for emancipatory change. If we can conceptualize a CGS then we should be able to conceive of a ‘‘noncritical globalization studies.’’ If a CGS is one that acknowledges the historical specificity of existing social arrangements, then a ‘‘noncritical globalization studies’’ is one that takes the existing world as it is. Such a non-critical globalization studies is thriving in the twenty-first-century academy. It is a studies that denies that the world we live in—twenty-first-century global society—is but one particular historical form, one that has a beginning and an end, as do all historical forms and institutions.

  4. June 15, 2015

    Insightful. Thanks Glenn.

    – Greg

  5. July 27, 2015

    Just found your blog Greg – a lot of insightful stuff on here!

    Coming from a scientific discipline (Neuroscience), I always find myself looking for data to back up my decisions. But so often in business (as in economics) that data isn’t the result of changing a single variable and measuring its effect, and as such it is often difficult to establish any causality.

    How do we combat this? Question our assumptions more or simply abandon data driven “scientific” management and all trust our guts?

  6. July 27, 2015

    I think the answer is somewhere in between. It’s important to take data seriously; the trouble comes when we try to over-engineer solutions. For example, the financial crisis largely happened not because the errors in the models were so great, but because we had far too much confidence in them and pushed them past their limits. Human judgment was largely taken out of the equation because the traders didn’t understand the underlying models or their limitations.

    – Greg
