
Why Business Strategy Shouldn’t Be “Scientific”

2019 November 17
by Greg Satell

When the physicist Richard Feynman took the podium to give the commencement speech at Caltech in 1974, he told the strange story of cargo cults. On certain islands in the South Pacific, he explained, tribal societies had seen troops build airfields during World War II and were impressed with the valuable cargo that arrived at the bases.

After the troops left, the island societies built their own airfields, complete with mock radios, aircraft and mimicked military drills, in the hopes of attracting cargo themselves. It seems more than a little silly, and of course, no cargo ever came. Yet these tribal societies persisted in their strange behaviors.

Feynman’s point was that we can’t merely mimic behaviors and expect to get results. Yet even today, nearly a half century later, many executives and business strategists have failed to learn that simple lesson, attempting to inject “science” into strategy. The truth is that while strategy can be informed by science, it can never be, and shouldn’t be, truly scientific.

Why Business Case Studies Are Flawed

In 2004, I was leading a major news organization during the Orange Revolution in Ukraine. What struck me at the time was how thousands of people, who would ordinarily be doing thousands of different things, would stop what they were doing and start doing the same thing, all at once, in nearly perfect unison, with little or no formal coordination.

That’s what started the journey that ultimately resulted in my book, Cascades. I wanted to harness those same forces to create change in a business context, much like the protesters in Ukraine achieved in a political context and countless others, such as LGBT activists, did in social contexts. In my research I noticed how different studies of political and social movements were from business case studies.

With historical political and social movements, such as the civil rights movement in the United States or the anti-Apartheid struggle in South Africa, there was abundant scholarship, often based on hundreds, if not thousands, of contemporary accounts. Business case studies, on the other hand, were largely done by a small team performing a handful of interviews.

When I interviewed people involved in the business cases, I found that they shared some important features with political and social movements that weren’t reported in the case studies. What struck me was that these features were noticed at the time, and in some cases discussed, but weren’t regarded as significant.

To be clear, I’m not arguing that my research was more “scientific,” but I was able to bring a new perspective. Business cases are, necessarily, usually focused on successful efforts, researched after the fact and written from a management perspective. We rarely get much insight into failed efforts or see perspectives from ordinary customers, line workers, competitors and so on.

The Halo Effect

Good case studies are written by experienced professionals who are trained to analyze a business situation from a multitude of perspectives. However, their ability to do that successfully is greatly limited by the fact that they already know the outcome. That can’t help but color their analysis.

In The Halo Effect, Phil Rosenzweig explains how those perceptions can color conclusions. He points to the networking company Cisco during the dotcom boom. When it was flying high, it was said to have an unparalleled culture with happy people who worked long hours but loved every minute of it. When the market tanked, however, all of a sudden its culture came to be seen as “cocksure” and “naive.”

It is hard to see how a company’s culture could change so drastically in such a short amount of time, with no significant change in leadership. More likely, given a successful example, analysts saw particular qualities in a positive light. However, when things began to go the other way, those same qualities were perceived as negative.

So when an organization is doing well, we see it as “idealistic” and “values driven,” but when things go sour, those same traits are seen as “arrogant” and “impractical.” Given the same set of facts, we can, and often do, come to very different conclusions when our perception of the outcomes changes.

The Problem With Surveys

Besides case studies, another common technique for analyzing business trends and performance is the executive survey. Typically, a research company or consulting firm sends out questionnaires to a few hundred executives and then analyzes the results. Much like the cargo cults Feynman described, the surveys give these studies an air of scientific rigor.

This appearance of scientific rigor is largely a mirage. Yes, there are numbers, graphs and pie charts, much as you would see in a scientific paper, but important elements are usually missing, such as a clearly formulated hypothesis, a control group, and a peer review process.

Another problematic aspect is that these types of studies emphasize what a typical executive thinks about a particular business issue or trend. So what they really examine is the current zeitgeist, which may or may not reflect current market reality. A great business strategy does not merely reflect what typical executives know, but exploits what they do not.

Perhaps most importantly, these types of surveys are generally not marketed as simple opinion surveys, but as sources of profound insight designed to help leaders get an edge over their competitors. The numbers, graphs and pie charts are specifically designed to look “scientific” in order to make them appear to be statements of empirical fact.

Your Strategy Is Always Wrong, You Have To Make It Right

We’d like strategy to be scientific, because few leaders like to admit that they are merely betting on an idea. Nobody wants to go to their investors and say, “I have a hunch about something and I’d like to risk significant resources to find out if I’m right.” Yet that’s exactly what successful businesses do all the time.

If strategy were truly scientific, then you would expect management to get better over time, much as, say, cancer treatment or technology performance does. However, just the opposite seems to be the case. The average tenure of companies on the S&P 500 has been shrinking for decades, and CEOs get fired more often than they used to.

The truth is that strategy can never be scientific, because the business context is always evolving. Even if you have the right strategy today, it may not be the right strategy for tomorrow. Changes in technology, consumer behavior and the actions of your competitors make that a near certainty.

So instead of assuming that your strategy is right, a much better course is to assume that it is wrong in at least some aspects. Techniques like pre-mortems and red teams can help you to expose flaws in a strategy and make adjustments to overcome them. The more you assume you are wrong, the better your chances are of being right.

Or, as Feynman himself put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

– Greg

Image: Pixabay

4 Responses
  1. November 18, 2019

    This sounds a lot like antifragility—making sure that the system is capable not only of dealing with chaos, but of getting better from it. I think it applies quite well to business models and I’ve started using it in my consulting. It makes a lot of sense to plan for positive reactions to a dynamic environment. Plan for the six-sigma events, because they happen.

  2. November 18, 2019

    Hi Adam,

    I think the two are related. However, the point I’m making here is slightly different. I think the desire to be seen as more “scientific” creates worse strategy. Instead of introducing more doubt and rigor it often does the opposite.

    – Greg

  3. November 18, 2019

    Thank you Greg. The “core” of science is in fact that it never knows and is never done. The “method” implies reviewing the results and testing once more. Almost no experiments truly produce the theoretical results so it is always that the overwhelming evidence supports the conclusion rather than it being “proof”. And so it is with reality in that conditions do keep changing and so must our reactions. This is not a condemnation of science, but rather an embrace of it and while we must usually proceed on what we consider data, we must also be open to it changing and embrace that as well.
    Surely, science has been wrong in the past, but it is often the best we can do. The issues come with unbending loyalty to information that is in the midst of being disproved, or changing. Much of what we acted successfully on in the past may no longer work out, so we need to be open and take change as it comes. It is not a threat, it just is.
    It seems that the key is to gauge when to change horses, and those that make the leap in good time can get well ahead.

  4. November 19, 2019

    As always, thanks for sharing your thoughts Robert!

    – Greg
