Monday, May 15, 2020

Predictably Unpredictable (Part 1)

In today's organizational environment, the ease with which computers let us collect and analyze enormous amounts of data usually works against good decisions. Common sense says more and better information must improve the ways people make choices. What common sense misses is the finite capacity of the human brain to absorb, understand, and bring together all that data into a state where it can do some good.

Statistics, especially anything about probability, are nearly incomprehensible to the average person. That goes for the average leader — and many of the non-average ones — as well. It wouldn't matter much, except that most organizations today are run on the basis of statistical computations: key indicators, ratios, indexes of performance, comparisons, norms and statistical charts. Unfortunately, it rarely works as intended.

To control an organization through the collection of data, its leaders need to be certain they can do two things: make accurate forecasts systematically and over long periods of time; and understand precisely what effect any change of policy will have.

Neither of these requirements can be met in organizational life. Nearly all forecasts are incorrect even over short time-scales. The markets, the gyrations of the economy, the political and financial outlook are too complex to be predicted, even by the best-equipped of forecasters. There are too many variables and too many chance interactions. Organizational “forecasts” are statements of hope and purpose; they don’t tell you what will happen.

Nor can leaders be certain of the effect of any policy decision. The workings of chance are too strong. Governments constantly try to create the outcomes they desire, wielding far greater powers than any CEO. Yet most government policies also fail to produce what they were designed to achieve. Even the most despotic dictatorships, like the former Soviet Union under Joseph Stalin or North Korea today, cannot make their own countries function as they desire.

The Laws of Large Numbers

Statistics have been aptly called “the laws of large numbers.” The findings of most statistical techniques apply only when the numbers involved are large. When numbers are small, the amount of uncertainty can be so great that the finding becomes useless. That's why drug manufacturers, who use some of the most sophisticated and complex statistical methods in their product trials, can still be caught out by an unexpected side-effect or drug interaction.
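A quick way to see why small numbers betray you is to simulate trials against a rare event. This is a minimal Python sketch; the 1-in-1,000 side-effect rate and the trial sizes are invented purely for illustration.

```python
import random

random.seed(42)  # reproducible illustration

# Invented example: a side-effect that strikes 1 patient in 1,000.
TRUE_RATE = 0.001

def observed_rate(n_patients: int) -> float:
    """Simulate one trial and return the side-effect rate actually seen."""
    cases = sum(random.random() < TRUE_RATE for _ in range(n_patients))
    return cases / n_patients

# Five repeated trials at each size: small trials will often see no cases
# at all, so the rare effect stays invisible until the numbers get large.
for n in (500, 5_000, 500_000):
    rates = [observed_rate(n) for _ in range(5)]
    print(f"n={n:>7}:", ["%.4f" % r for r in rates])
```

In runs of 500 patients the side-effect will often never appear at all, so the trial “finds” a rate of zero; only the largest runs come anywhere close to the true figure.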

Businesses rarely use statistical techniques with any degree of sophistication. Mostly they accept whatever “the numbers” show, regardless of sample size, the likelihood the sample is a fair one, or anything else. In their desire to find a way of predicting what is essentially unpredictable — the behavior of their markets and the businesses that trade there — they clutch at almost any scientific-sounding straw.

One example will suffice. Key indicators are used to show when things are going well or badly. They do this, it is assumed, because changes in the indicator — a number or calculation — correlate closely with changes in the real world. It’s important to be clear that correlation isn’t cause and effect. That’s one of the commonest mistakes. Correlation means only that some statistical link has been found. Changes in business activity don’t cause the key indicators to move; the indicators move by themselves, in ways that are believed to correspond to events. The uncertain nature of this link is often misunderstood.
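How easily does a “statistical link” appear where no causal connection exists? Here is a small sketch (the correlation function is the standard Pearson formula; the two “indicator” series are pure random walks, generated independently of each other, so any correlation between them is sheer chance):

```python
import random
from statistics import mean

random.seed(7)  # reproducible illustration

def random_walk(steps: int) -> list:
    """A series that drifts at random -- no signal, just accumulated noise."""
    x, path = 0.0, []
    for _ in range(steps):
        x += random.gauss(0, 1)
        path.append(x)
    return path

def pearson_r(xs, ys):
    """Standard Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Two completely independent "indicators" over 100 periods.
a, b = random_walk(100), random_walk(100)
print(f"correlation of two unrelated series: {pearson_r(a, b):+.2f}")
```

Run it a few times with different seeds and you will regularly see sizable correlations between series that, by construction, have nothing whatever to do with each other.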

Strong correlations are hard to find in real-world data, but let's suppose an extremely strong correlation to illustrate this misunderstanding. A correlation of 0.8 would be an amazingly powerful link in statistical terms. But what does it mean?

It doesn't mean you can be 80% certain of the outcome, or that there's an 80% chance that what you predict from the correlation will happen. A link like this is often, erroneously, expressed as “this predictor is 80% accurate.” It isn't. Even 80% accuracy would leave one chance in five of being wrong, and the true odds here are worse still. It's the square of the correlation that shows how much of the variability it accounts for. A correlation of 0.8 therefore accounts for only 64% of the variability in the situation; the remaining 36% isn't covered by the key indicator at all.
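The arithmetic, as a two-line check in Python:

```python
r = 0.8                      # the "amazingly powerful" correlation above
explained = r ** 2           # share of variability the indicator accounts for
print(f"r = {r}: explains {explained:.0%}, "
      f"leaves {1 - explained:.0%} unaccounted for")
# r = 0.8: explains 64%, leaves 36% unaccounted for
```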

In real situations, correlations of 0.3 or 0.4 are more likely to be the best available. That means the link between the key indicator and that aspect of the business would cover between 9% and 16% of all the sources of variability. That’s not much of a link. And for it to be a link at all, it relies on large numbers — not your one chance at a business breakthrough, but maybe thousands of attempts by hundreds of companies, most of which will still fail.
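As a rough simulation of what a 0.3 correlation buys in practice (assuming, purely for illustration, a simple bivariate-normal link between indicator and outcome): using the indicator's direction to predict the outcome's direction succeeds only around 60% of the time, and even that figure emerges only across a large number of attempts.

```python
import random

random.seed(1)  # reproducible illustration
R = 0.3         # assumed indicator-outcome correlation

def sample_pair():
    """One (indicator, outcome) observation with correlation R."""
    x = random.gauss(0, 1)
    y = R * x + (1 - R ** 2) ** 0.5 * random.gauss(0, 1)
    return x, y

def sign_agreement(n: int) -> float:
    """How often 'indicator up' correctly predicts 'outcome up'."""
    pairs = (sample_pair() for _ in range(n))
    hits = sum((x > 0) == (y > 0) for x, y in pairs)
    return hits / n

for n in (10, 100, 100_000):
    print(f"n={n:>7}: direction right {sign_agreement(n):.0%} of the time")
```

At small n the figure bounces around wildly; only at large n does it settle near its true value of roughly 60%, which is barely better than a coin toss for any single decision.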

Leadership cannot be a mechanistic process, nor even a precise science. There’s too much uncertainty for either to work. Why do people follow inaccurate “key indicators” as they do? Mostly, I suspect, because they’ve been told it’s the right thing to do; ignoring them risks being blamed for any subsequent mistakes — though the mistakes would probably have happened anyway. Tracking figures feels more objective than it is. It also feels safer than it is. Yet, whatever else, it’s quicker and simpler. A page of summarized, simplified numbers is much easier to deal with than the messy, unpredictable and uncertain situations of the business itself.

In Part 2 of this short series, I’ll explain why many of the fashionable types of analyses and predictions are more like disorganizing principles than organizing ones.





Comments:
Hear, hear.

I'm a scientist doing R&D for a corporation. Mathematically, I am not exactly average, with an undergraduate minor in math and a PhD in a physical science. But statistics is still something that I struggle with. I can work it out, but it is exactly that: work, and lots of it. One has to be very careful that one is not just calculating nonsense. One needs to believe that the data has some integrity. One has to test and test and test for bias.

There is a cult of the single number: a quick measure, however dubious, that tells you how close you are to some, perhaps poorly defined, goal. I'd like to see the stats on how much time is wasted producing numbers like that.
 
Dave, you're an honest guy. Many people far less qualified than you still won't admit to having problems with statistics.

You're also absolutely right that the futile search for a single number to encapsulate some pretty complex situations wastes massive amounts of time. Of course, when you get "the number," it usually tells you more or less nothing. Meanwhile, you could have spent that time productively understanding the problem, instead of trying to reduce it to some poorly understood "indicator."
 
You make some very good points in this post -- but I'd like to add a little more disturbing information about this reliance on statistics and data.

I work in the education field -- and the hot buzz-word this year is "data-driven decision making" (inspired by No Child Left Behind).

Now let's consider that just for a moment. Educators -- people who have, in most cases, NEVER taken a statistics class -- are now "analyzing" loads of data to make instructional decisions.

The results that I have seen with my own eyes (and I am not a statistician or a psychometrician) -- administrators and teachers making changes to programs and instructional techniques based on ONE set of data (the once-a-year state standardized exam).

I do know enough about standardized testing to know that it tells you NOTHING about what is going on in a classroom or what kind of progress individual students have made.

The once-a-year standardized exam only tells you one thing -- how those students performed on that particular test on that particular day.

And -- those results can be affected by any number of things -- hunger, lack of sleep, lack of parental support at home, Limited-English Proficiency (LEP) status, etc.

In order to truly understand how students are progressing and what instructional methods or programs are helping or hindering progress, we need to "triangulate" multiple sets of data -- everything from standardized test scores to classroom observation data to climate surveys and attendance/discipline data.

But -- now that the education field is following in the footsteps of business... the test scores are our "bottom-line" and the focus is only on that -- not on long-term improvement or growth.
 
"The once-a-year standardized exam only tells you one thing -- how those students performed on that particular test on that particular day."

Rock. Dead-on. I've often thought that, too.

A survey conducted by some company with 10,000 participants only describes the way those people felt and thought at that singular point in history.
 



