How America’s Health Care Fell Ill


There is another effect of employer-purchased health insurance, unimagined when the system first began, that has had pernicious economic consequences in recent years. Insurers base the rates they charge, naturally enough, on the total claims they expect to incur. Auto insurers determine this by looking at what percentage of a community’s population had auto accidents in recent years and how much repairs cost in that community. That is why they charge someone living and driving in Manhattan far more every year than they would someone living in Frozen Sneakers, Iowa.

This system is known as community rating. Needless to say, the company also looks at the driver’s individual record, the so-called experience rating. If someone has had four crackups in the last year, he will have at the least a lot of explaining to do and is likely to be charged a far higher rate—if he can get insurance at all. Most insurance policies are based on a combination of the two rating systems.

But in most forms of insurance, the size of the community that is rated is quite large, and this eliminates the statistical anomalies that skew small samples. In other words, a person can’t be penalized because he happens to live on a block with a lot of lousy drivers. But employer-based health insurance is an exception. It can be based on the data for each company’s employees, which are right at hand.
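To make the small-sample problem concrete, here is a minimal sketch in Python. All of it is illustrative: the claim amounts, pool sizes, and the 50/50 "credibility" weight blending the two rating systems are hypothetical numbers chosen for clarity, not actual actuarial figures.

```python
# Illustrative sketch with made-up numbers, not real actuarial practice.
# It shows why rating one employer's small staff is volatile: a single
# high-cost member barely moves a large community's average claim but
# dominates a ten-person company's experience rating.

def average_claim(pool):
    """Pure experience rating: the premium tracks the pool's mean claims."""
    return sum(pool) / len(pool)

def blended_premium(pool, community_avg, credibility=0.5):
    """Blend of experience and community rating ("credibility" weighting);
    credibility is the weight given to the pool's own claims history."""
    return credibility * average_claim(pool) + (1 - credibility) * community_avg

# A community of 10,000 people averaging $2,000 a year in claims,
# including one member who incurs $200,000.
community = [2_000] * 9_999 + [200_000]
community_avg = average_claim(community)   # about $2,020: barely moves

# A ten-employee firm with that same unlucky individual on staff.
small_firm = [2_000] * 9 + [200_000]
firm_avg = average_claim(small_firm)       # $21,800: more than tenfold

print(f"community rate:  ${community_avg:,.0f}")
print(f"small-firm rate: ${firm_avg:,.0f}")
print(f"blended (50/50): ${blended_premium(small_firm, community_avg):,.0f}")
```

One $200,000 claim is statistical noise in a pool of ten thousand but dominates a pool of ten, which is exactly what makes each employer's workforce a dangerously small sample to rate.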

In order to compete with Blue Cross and Blue Shield, other insurers began to look for companies whose employees had particularly good health records and offer them policies based on that record at prices far below what Blue Cross and Blue Shield—which by law had to take all comers—could offer. This tactic, known in the trade as cherry picking, concentrated the poor risks in the Blue Cross and Blue Shield pool and began to drive up their premiums alarmingly. It also meant that small companies had a much harder time buying affordable health insurance, because a single employee with a poor health record could send their experience rating way up. The effects of this practice are clear: 65 percent of workers without health insurance work for companies with twenty-five or fewer employees. (Another consequence of cherry picking is that many people with poor health are frozen in their jobs, for they know they would never get health insurance if they left.)

By 1960, as the medical revolution quickly gathered speed, the economically flawed private health-care financing system was fully in place. Then two other events added to the growing debacle.

In the 1960s the federal and state governments entered the medical market with Medicare for the elderly and Medicaid for the poor. Both doctors and hospitals had fought tooth and nail to prevent “socialized medicine” from gaining a foothold in the United States before finally losing the battle in 1965. As a result of their over-my-dead-body opposition, when the two programs were finally enacted, they were structured much like Blue Cross and Blue Shield, only with government picking up much of the tab, and not like socialized medicine at all. Medicare and Medicaid proved a bonanza for health-care providers, and their vehement opposition quickly and quietly faded away. The two new systems greatly increased the number of people who could now afford advanced medical care, and the incomes of medical professionals soared, roughly doubling in the 1960s.

But perhaps the most important consequence of these new programs was the power over hospitals that they gave to state governments. Medicaid quickly made these governments by far the largest source of funds for virtually every major hospital in the country. That gave them the power to influence or even dictate policy decisions by these hospitals, and those decisions came to be made more and more for political reasons rather than medical or economic ones. Closing surplus hospitals or converting them to specialized treatment centers became much more difficult. Those adversely affected—the local neighborhood and hospital workers’ unions—would naturally mobilize to prevent it; those who stood to gain—society as a whole—would not.

The second event was the litigation explosion of the last thirty years. For every medical malpractice suit filed in this country in 1969, three hundred are filed today. This has sharply driven up the cost of malpractice insurance, a cost that is, of course, passed directly on to patients and their insurance companies. Neurosurgeons, even those with excellent records, can pay as much as $220,000 a year for coverage. Doctors in less suit-prone specialties are also paying much higher premiums and are driven to order unnecessary tests and perform unnecessary procedures to avoid being second-guessed in court.

Because of all this, it followed as the night the day that medical costs began to rise over and above inflation, population growth, and rapidly increasing capability. The results for the country as a whole are plain to see. In 1930 we spent 3.5 percent of the country’s gross national product on health care. In 1950 it was 4.5 percent; in 1970, 7.3 percent; in 1990, 12.2 percent. American medical care in the last six decades not only has saved the lives of millions who could not have been saved before and relieved the pain and suffering of tens of millions more but also has become a monster that is devouring the American economy.

Is there a way out?