Docs Give Grading System a Failing Grade

May 28th, 2008

The Group Insurance Commission (GIC) in Massachusetts came up with a nifty idea: let’s grade physicians based upon efficiency and competence; we’ll reward those with high marks and penalize those who are (relative) failures. (The GIC administers health plans for public sector employees.) The GIC worked with the Massachusetts Medical Society (MMS) and a number of insurance carriers to develop a reasonable methodology and metrics for grading doctors. After four years of planning, the GIC rolled out the program. Unfortunately, the MMS rolled out the lawyers: the society is suing the GIC and a number of health plans for defamation, interference, breach of contract, bad faith, and violation of due process. Other than that, Mrs. Lincoln, what did you think of the play?
The suit claims doctors have been capriciously ranked into tiers, from 1 through 3, based upon a faulty analysis of billings. The tier assigned to a doctor determines the co-payment his or her patients must make, with higher-numbered tiers carrying higher co-pays. For example, the Tufts Health Plan has established the following co-pays for doctor visits:
Tier 1 doctor = $15 co-pay
Tier 2 doctor = $25 co-pay
Tier 3 doctor = $35 co-pay
The MMS claims, first, that the tiering system is based upon faulty data. For example, one doctor who specializes in treating severe cases of multiple sclerosis has an inflated “cost per patient” due to her interdisciplinary approach. She carries a tier 3 ranking, yet that poor score takes no account of the severity of her patients’ conditions or her success in treating them. In another example, a doctor simply examined medical records and provided an interpretation: he was held accountable for the ultimate treatment of patients he never actually saw.
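The statistical complaint is easy to see with a back-of-the-envelope sketch. The GIC has not published its formula, so the figures, the “severity” index, and the adjustment below are invented purely for illustration; the point is only that a raw cost-per-patient ranking punishes whoever treats the sickest patients, while even a crude case-mix adjustment largely erases the gap.

```python
# Hypothetical illustration only: the GIC's actual methodology and data are not public.
# Each physician is summarized by total billings, patient count, and an invented
# "severity" index describing case complexity (1.0 = average case mix).
physicians = [
    {"name": "MS specialist",     "billings": 900_000, "patients": 150, "severity": 3.0},
    {"name": "General internist", "billings": 400_000, "patients": 400, "severity": 1.0},
]

def raw_cost_per_patient(p):
    """Cost metric with no adjustment for how sick the patients are."""
    return p["billings"] / p["patients"]

def severity_adjusted_cost(p):
    """Divide out the expected cost of the case mix before comparing physicians."""
    return raw_cost_per_patient(p) / p["severity"]

for p in physicians:
    print(f"{p['name']}: raw ${raw_cost_per_patient(p):,.0f}/patient, "
          f"adjusted ${severity_adjusted_cost(p):,.0f}/patient")

# Raw ranking: the specialist looks six times as expensive ($6,000 vs. $1,000 per
# patient) and lands in tier 3. Adjusted for severity, the gap shrinks to
# $2,000 vs. $1,000 and says little about the quality of care delivered.
```

In other words, a metric like this is only as good as its case-mix adjustment; without one, the ranking measures who a doctor treats rather than how well.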
Because these low rankings rest on incomplete and often inaccurate data, the MMS concludes that good doctors have been defamed.
In addition, the MMS claims that patients have been defrauded by being steered toward certain doctors for no sound reason: they pay less for tier 1 visits even though they may not be getting the best available care, and they pay substantially more for tier 3 visits even though the quality may well exceed that of tier 1 doctors.
Dr. Bruce S. Auerbach, president of the MMS, said efforts to improve the tiering program have failed.
“There is a right way to do this, and a wrong way – and the Clinical Performance Improvement initiative is definitely not the right way.”
“We have worked with the GIC for four years to improve its program, and the agency has made changes in some limited areas. However, the GIC has refused to correct the CPI’s most glaring problem, which is its ranking of individual physicians using inaccurate, unreliable, and invalid tools and data.”
Not Close Enough
We all know that there are physicians whose services are mediocre and, at times, dangerous. But the problem is in the data: how do you determine the quality of services? How do you distinguish between prudent and outrageous treatment? Data is data, but behind the numbers are stories of lives saved and lives ruined. Number-crunching computers cannot tell the difference.
Unless the parties settle prior to trial, the discovery process will expose the GIC’s methodology for grading doctors as clearly as an MRI. Based upon the MMS’s lawsuit, the GIC’s metrics appear to be fairly crude. The good news is that a number of mediocre doctors have been exposed. Unfortunately, the broad net cast by the tiering system has tainted the reputations of some very competent and compassionate physicians. In this particular endeavor, “reasonably close” assessments are not sufficient. The margin of error – where the reputation of a doctor is at stake – is very small indeed.
Medicine is both a science and an art. With the livelihoods of medical practitioners at risk, any methodology for evaluating the quality and effectiveness of services must be precise and accurate to the nth degree. If your methodology cannot distinguish between incompetence and art, if it cannot place virtually every outstanding physician in the top tier, then the metrics are pretty worthless. At first glance, the GIC’s admirable effort to triage the docs fails to pass muster. In all likelihood, the pending clash in court will send all the parties back to the proverbial drawing board.