Statistics help us comprehend the world—sometimes
In one of the most famous metaphors in western thought, Plato wrote about a man chained in a cave. Unable to see outside, all he could know of the world beyond his prison was what he was able to deduce from the shadows that were projected on the wall by whatever passed by. Plato was talking about how we all are prisoners of our own bodies and know only what our senses tell us of the world beyond.
We are still imprisoned in our own bodies, of course, and presumably always will be. But in the modern world we all are chained in caves of another kind as well, caves where the shadows on the walls are what we call statistics. We cannot see or touch or hear “the American economy.” So to get a grasp of it, and much of what depends on it, we need numbers, numbers that make an intellectual abstraction “visible” to the human mind.
Statistics are both ancient and astonishingly modern. When the shepherds of Plato’s Attica counted their flocks, they were, of course, making a statistical analysis. But the word itself entered the English language only in 1787. It is no coincidence that that was also the decade James Watt patented the rotary steam engine. As the factory system, which had evolved in the middle of the eighteenth century, and the new power source spread, enterprises could operate on a far larger scale.
This meant that seat-of-the-pants management became less efficient. The owner of a business that employs hundreds or thousands of people needs statistics. As the mathematics of statistics, and thus their power to illuminate an increasingly complex world, rapidly advanced in the nineteenth century, governments began to utilize them more and more as well.
And because politicians like to point with pride at statistics that make them look good, and view with alarm statistics that make their opponents look good, it is not surprising that they were soon making tendentious use of them. Less than a hundred years after the word statistics entered the language, Benjamin Disraeli opined that there were three forms of mendacity: “lies, damned lies, and statistics.”
Politicians are only part of the problem. Because bad news sells more newspapers than good news, bad statistics are likely to be played up in the news media. Further, while schools often teach how to create statistics, they seldom teach how to interpret them and spot the phonies. Thus political reporters and their editors often simply pass them on unvetted, so to speak.
Analyzing statistics is crucial to understanding what is really going on. Some statistics are both “hard” and easy to interpret. You don’t need to know much about baseball to be awed by Hank Aaron’s 755 home runs or Nolan Ryan’s seven no-hitters. But many statistics are soft. The unemployment number that comes out once a month counts only people who are looking for jobs, a vague concept if ever there was one. Every comedian worth his shtick has a brother-in-law who’s been “looking for a job” for years.
That is why a new book by W. Michael Cox and Richard Alm, Myths of Rich & Poor: Why We’re Better Off Than We Think (Basic Books, $25.00), is so useful. The authors, an economist with the Federal Reserve Bank in Dallas and a business journalist, tease apart the statistics that are bandied about so widely in the media to see what’s really there. The results are a revelation.
Consider the subject of wages, which have been “falling” in the United States since the early 1970s. In the last two decades the decline has amounted to fully 15 percent per capita (after inflation is factored in). Looking at the raw number, many commentators have been proclaiming the end of the American dream, sinking living standards, and coming social unrest. Yet a few other statistics give a completely different picture. In 1970 the average American house had 1,500 square feet; today it has 2,150 square feet. Meanwhile, the average number of persons per household has fallen from 3.14 to 2.64. The number of Americans who took a cruise in 1970 was 500,000; the number in 1997 was 4.7 million.
Will Rogers once said that America was the first society in history to go to the poorhouse in an automobile. Is that what’s going on here? After all, in 1970 only 29.3 percent of American households had two or more vehicles; by 1995 fully 61.9 percent did. The answer to the question is, of course, no. The problem lies in the differing meanings of the word wages.
The average man on the street tends to think of wages as the same as income, what a family brings home to pay the bills. But wages are only the cash earned in exchange for labor. A man without a job, but with a ten-million-dollar trust fund, has no wages but a lot of income. If you look at what statisticians call “per capita personal income”—i.e., the total output of the American economy divided by the total population—you get a very different story. Unlike wages, personal income captures savings income, rents, profits, and other sources of spendable bucks. And while wages have been falling at an average of 0.7 percent a year since the early 1970s, personal income has been rising at the rate of 1.6 percent a year. It’s up more than 40 percent since 1973, about the same as the average increase in living space. Although wages have been falling, fringe benefits have been surging, thanks in part to the country’s tax laws. While “free” health care has been a feature of the American economy since the 1940s, today eye care, dental care, day care, paid maternity leave, and generous stock purchase plans are increasingly common. Since 1970 nonmonetary benefits have risen by a third relative to wages.
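The arithmetic behind that comparison is simple compounding, and it can be sketched in a few lines of Python. The 25-year span used here is an assumption (the essay says only “since 1973” and “since the early 1970s”); the annual rates are the ones the essay cites.

```python
# Sketch: compounding the essay's annual rates over an assumed
# 25-year span (roughly 1973 onward) to compare wages with income.
def compound(rate, years):
    """Total growth factor from an annual rate compounded over `years`."""
    return (1 + rate) ** years

years = 25  # an assumption; the essay says "since 1973"
wages = compound(-0.007, years)   # wages falling 0.7 percent a year
income = compound(0.016, years)   # personal income rising 1.6 percent a year

print(f"wages:  {100 * (wages - 1):+.0f}%")   # roughly -16%
print(f"income: {100 * (income - 1):+.0f}%")  # roughly +49%
```

Small annual rates diverge dramatically over decades: a 0.7 percent yearly decline and a 1.6 percent yearly rise, each compounded over a quarter century, end up more than sixty percentage points apart, which is consistent with the essay’s “up more than 40 percent since 1973.”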
But historians as well as newspaper readers have to beware of being bamboozled by statistics. Consider one of the most famous statistics in the world, the Dow Jones Industrial Average. When people ask, “What did the market do today?,” that’s the number they want. You can now watch the Dow Jones go up and down by the second on TV or the Internet.
The Dow is also an invaluable tool for looking at the history of Wall Street, for it is the oldest continuous stock market average around, having been started on May 26, 1896. The genius behind this wonderfully simple idea was Charles Dow, the cofounder of The Wall Street Journal. “The stock market,” Dow thought, “is in the nature of a barometer which reflects the rise and fall of general conditions.” But how to read the barometer? Newspapers already published closing stock prices, but they didn’t tell the reader what the market as a whole was doing. So Dow created two averages, one of railroad stocks—the blue chips of the day—and one for industrial companies, then considered much riskier.
The first industrial average had twelve stocks in it, and it closed that first day at 40.94. Dow created his market index just in time to see the market tank. By August it was down to 28.48, a 30 percent decline. The market soon recovered, and within ten years the Dow Jones had topped 100. In the 1920s it soared, finally topping out at 381.17 on September 3, 1929 (a year after the average had been enlarged to thirty stocks). Then, of course, it began to fall and fall and fall. Finally, in July 1932, it reached bottom: 41.22, only a quarter point above where it had begun thirty-six years earlier. It took the Dow twenty-five years to reach its 1929 high once again, and another eighteen before it finally closed over 1,000 for the first time. It would be another decade before it closed under 1,000 for the last time. Since then, of course, the Dow, give or take a few dips, has been up, up, and away.
How good is the Dow as a measure of the stock market? Certainly there have been complaints about it almost since the day it was born. For one thing, it is price-weighted rather than market-weighted. This means that the high-priced stocks in the average count for more than low-priced ones, regardless of their market capitalization (the price of their shares times the total number of shares outstanding). The reason the average is calculated this way is easy enough: Charles Dow needed to be able to do it quickly, with paper and pencil. Another cause for complaint is that there are only thirty stocks in the average, while there are tens of thousands of stocks being traded every day on Wall Street.
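A toy calculation makes the price-weighting complaint concrete. The two stocks below, their prices, and their share counts are entirely hypothetical, chosen only to show how a high-priced stock of a small company can dominate a price-weighted average while barely registering in a market-cap-weighted one.

```python
# Hypothetical stocks: name -> (share price, shares outstanding).
stocks = {
    "BigPrice":   (300.0, 1_000_000),    # high price, small company
    "SmallPrice": (30.0, 100_000_000),   # low price, far larger company
}

# Price-weighted (the Dow's method): add up the prices, divide by a
# divisor -- here simply the number of stocks, as in Dow's original
# paper-and-pencil calculation.
price_weighted = sum(p for p, _ in stocks.values()) / len(stocks)

# Market-cap-weighted: each stock counts in proportion to
# price times shares outstanding.
total_cap = sum(p * n for p, n in stocks.values())
cap_weights = {name: p * n / total_cap for name, (p, n) in stocks.items()}

print(price_weighted)  # 165.0 -- the high-priced stock dominates
print(cap_weights)     # SmallPrice carries about 91% of the cap weight
```

In this sketch a 10 percent move in BigPrice shifts the price-weighted average by 15 points, while a 10 percent move in SmallPrice, a company ten times as large by market value, shifts it by only 1.5. (In the real Dow, the divisor is adjusted over time for stock splits and substitutions rather than being a simple count of the stocks.)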
And the Dow has changed over the years. General Electric is the only stock today that remains from the original Dow. Some companies were removed because they collapsed into bankruptcy or merged with other companies, some because the keepers of the flame at Dow Jones thought others would represent the economy better. This has had no small effect on the Dow and on the history of the market.
In 1939, for instance, IBM, then a manufacturer of business equipment and tabulating machines, was removed from the Dow to make room for AT&T, which, except for its Western Electric subsidiary, wasn’t an industrial company at all. But in the next forty years, until IBM was readmitted to the Dow in 1979, AT&T’s stock-price performance was unimpressive, to say the least, merely doubling. Meanwhile, IBM became one of the legends of Wall Street, splitting its stock no fewer than twenty-nine times in those forty years while its price increased a staggering 22,000 percent.
Had the powers-that-were at Dow Jones simply left well enough alone, the Dow would have recovered its 1929 high, crossed a thousand, and begun its climb into the stratosphere much sooner than it did. The actual history of the stock market would have been different as well, for the players in the market would have reacted differently to the very different readings Charles Dow’s barometer would have given them.
The history of the Dow Jones and Cox and Alm’s book both demonstrate beyond doubt that statistics are not windows into worlds we cannot see, but only shadows of those worlds. To change metaphors abruptly and borrow a marvelous phrase I believe is from William F. Buckley, Jr., we must be careful not to build cathedrals around them.