News Release

Keeping Score On Doctors: Report Says Flaws In Counting Must Be Addressed To Ensure Accuracy

Peer-Reviewed Publication

Duke University Medical Center

DURHAM, N.C. -- Agencies that rank doctors and hospitals need to make sure they are comparing apples to apples, or rankings can become skewed and unfairly penalize high-quality medical professionals, according to cardiologists James Jollis of Duke University Medical Center and Patrick Romano of the University of California, Davis.

In a critical analysis published in the April 2 issue of the New England Journal of Medicine, the physicians warn that current methods of data collection are not accurate enough to make the results trustworthy. But, they add, if simple corrections are made, such scorecards can be a good way to ensure quality care.

"We are questioning whether these types of scorecards are really accurate enough to make available to consumers," Jollis said in an interview. "Hospital "scorecards" are here to stay, so as physicians we have the responsibility to be sure the methods used to generate rankings are as accurate as possible."

The researchers analyzed the Pennsylvania Health Care Cost Containment Council's 1996 report on how well doctors and hospitals in the state fared in caring for heart attack patients. They found that, while the method used was generally sound, it had several flaws that could have skewed the results. They chose Pennsylvania because it was the first state to implement a government-sponsored statewide ranking.

"The Pennsylvania report is a good intermediate step, but we need better systems for reporting outcomes before these type of rankings will be realistic," Jollis said.

The researchers say that before such rating systems are adopted and their results made public, they should be subjected to peer review to ensure that the most accurate information reaches the public.

In their analysis, Jollis and Romano argue that using information from hospital bills and insurance claims is not an accurate way to gather information about patient outcomes. They say such data, while more standardized than handwritten physician charts, often are not a clear reflection of quality of care or outcomes.

Jollis and Romano argue that it is particularly crucial for agencies rating doctors to account for differences among patients and to correctly identify which physician was responsible for each patient's care.

"Medicine, particularly with heart disease patients, is a highly interactive enterprise, with many physicians involved in a single case," Jollis said. "A heart attack patient may be treated by an emergency room physician, and transferred to a cardiologist's care if the case becomes particularly problematic."

For example, the researchers say that 14 percent of patients whom Medicare classified as having been cared for by cardiologists were actually admitted by internists or family physicians. The ideal monitoring system, they say, should focus on the doctors who have the greatest influence on outcome; for heart attacks, that is the first physician involved, since early intervention is crucial to treatment. The Pennsylvania report did not identify the responsible physician that way.

In addition, they say, the Pennsylvania study had other important shortcomings. Each hospital stay was counted separately, so a patient who was transferred from one hospital to another for further treatment and died at the second hospital was counted as a survivor at the first hospital but as a death at the second. Moreover, the methods used to define disease severity did not account for preexisting conditions that could have influenced how well patients did.

"Report cards are widely used by government, insurance companies, HMOs, and employers," Jollis said. "However, the quality of data is not yet up to the task of confidently identifying the best practices."

Jollis and Romano argue that future scorecards should:

Broaden the definition of acute myocardial infarction, or heart attack;
Use a consistent definition of responsible physician;
Redefine complications so preexisting conditions are accurately represented;

Combine information from all hospital stays for each patient.

The researchers argue that doctors and hospitals need to recognize that the era of "scorecards" for medical professionals is here to stay, and that if they want to make sure they are rated accurately, they should develop standardized ways of reporting patient data.

"As physicians, we already devote a large amount of time to documenting patient care, but in a narrative form that is not readily comparable," Jollis said. "If we redirect our efforts toward recording key data in a universally comparable format, we could avoid the types of inaccuracies that come from trying to extract outcome data from hospital claims."

"If we don't do it ourselves, insurance companies and government agencies will do it for us," he said.

###


