New breed of report cards turns up the heat on doctors

From the January 1995 ACP Observer, copyright 1995 by the American College of Physicians.

By Edward Doyle

In November 1992, the news at the Hospital of the University of Pennsylvania (HUP) was bad. The state had just released a health care report card detailing the mortality rates of patients who had undergone cardiac surgery in Pennsylvania. Three of the hospital's cardiac surgeons were identified as having excessive mortality rates compared to other surgeons in the state.

To make matters worse, the information was public and appeared in newspapers and on television newscasts everywhere. Calls came pouring in from the media and nervous patients. Within a year, the department's leadership had changed hands and the three surgeons had left HUP.

For many physicians, Pennsylvania's report card, one of the first in the country, confirmed their worst fears: that report cards were designed to do nothing more than target and punish physicians who appeared to practice differently than their peers. Practitioners feared that they would get little if any chance to explain their records and would face professional ruin. Little happened in the years that followed, however, and many physicians came to feel the threat had passed.

But as a new generation of report cards emerges, physicians are finding themselves once again confronting some of their original concerns. HMOs and health plans are poised to release a new breed of report cards, one that contains vast amounts of physician-specific data. And rather than simply releasing them to the public, these organizations plan to use report cards to penalize physicians, to determine their bonuses and to do whatever it takes to change the way they practice medicine.

Health care report cards are a booming business because they help health plans compete for big employer contracts in tough markets, particularly those where managed care has successfully controlled rising costs. "The issue of cost is still a dominant issue, but purchasers assume that any delivery system is going to be able to control costs in a positive way and they want more," explained David Epstein, MD, a consultant at Towers Perrin in Atlanta.

What purchasers want is detail on what they're getting for their health care dollars. As a result, HMOs such as Kaiser Permanente and U.S. Healthcare have been producing report cards that give purchasers information on patient satisfaction (how long it takes patients to get a doctor's appointment and how long they have to wait to see a physician) and on the use of preventive measures such as eye exams for diabetics. The contents of these report cards vary, and they are generally not distributed to patients.

Very few health plans, if any, have actually provided purchasers with physician-specific data. Only New York's health department has followed Pennsylvania's example and released physician-specific report cards.

Naming names

But just because HMOs are not giving purchasers data on individual physicians doesn't mean they aren't busy collecting it. Physicians may breathe a sigh of relief knowing that data about them won't make tomorrow's headlines, but HMOs' internal use of it may affect them far more dramatically.

Minneapolis-based United Healthcare, for example, regularly publishes report cards that contain patient satisfaction information and details of immunization rates. But in the process, the HMO has been amassing vast amounts of data on individual physicians. "In our reports on mammography rates and immunization rates, we can drill down and get an individual physician rate," explained Marcia Smith, director of operational performance. "We can say that Dr. Jones is giving women over age 50 mammograms 100% of the time and that Dr. Smith is doing it only 50% of the time."

There are signs that the HMO is not alone. A study by the employee benefits consulting firm Foster Higgins of Princeton, N.J., found that only 15% of the health plans surveyed, and just over half of the hospitals and large group practices, said they release report card data to the public; the rest use the data internally.

Philadelphia-based U.S. Healthcare, for example, compiles detailed physician-specific data into individual report cards that are sent to physicians every six months. Each report tells physicians exactly how many U.S. Healthcare patients they have seen, how many patients have left their practice in the last six months, how they rate on patient satisfaction surveys and how many U.S. Healthcare educational seminars they have attended.

U.S. Healthcare uses these individual report cards as the basis for financial rewards. Physicians who perform flexible sigmoidoscopies in their office instead of referring patients out, for example, get an extra percentage point added to their capitation rate. Physicians who keep their offices open 70 hours a week get an extra 1.5 percentage points; keeping an office open 50 hours a week earns only half a point. In all, physicians can increase their capitation rate by nearly one-third under the guidelines the HMO spells out.

Other HMOs are not yet attaching financial rewards to their physician data but are instead using it to identify--and find alternatives to--costly practice patterns. When New Jersey-based Prudential Health Care Plan Inc. found that some of its physicians in Florida were hospitalizing cesarean-section patients for longer than expected, it used its physician data to identify outliers. It then worked with the physicians to find an acceptable alternative to extended hospitalization--in this case, using visiting nurses to check on patients' surgical wounds--and was able to reduce hospitalization rates for these patients.

Statistics can be deceiving

But how valid are the report card data being collected and used by some HMOs to track down outlier physicians and change their behaviors?

  • Are the samples large enough? Take the example of a physician who sees only 20 female patients over age 50 from a health plan like Prudential. If only half of those patients receive a mammogram, does that mean that the physician does not use mammograms appropriately? "I don't know that you can draw conclusions from that kind of information unless the physicians you are measuring are seeing a lot of patients from your plan," explained Charles M. Cutler, ACP Member, vice president of medical services for Prudential.
  • Do the numbers tell the whole story? Even seemingly simple report card data--like the results of patient satisfaction surveys--can be deceiving. In conducting patient satisfaction surveys for the Health Care Outcomes Institute in Bloomington, Minn., for example, Bill Petersen, MD, has found that certain groups of patients--particularly Medicare and Medicaid patients--consistently rate their physicians low, no matter how good their care. "Patients over age 85 will almost never use the word 'excellent' when describing their physician," he said. "It's a cultural thing among older people."

General internists often have a hard time with patient satisfaction surveys, Dr. Petersen explained, because of the nature of their patient population and the work they do. One physician who is the only internist serving a group of 14 and three nearby rural towns, for example, consistently gets low patient satisfaction ratings. "He's interrupted constantly and his patients are annoyed, so he doesn't score very well," he said. "Plus, he sees a lot of chronically ill people who by the nature of their illness are discouraged with medicine in general."

Experts say that these examples underscore the need to do more than simply create report cards--to work with physicians in finding solutions. Dr. Cutler from Prudential, for example, said that changing physicians' behavior will require more than collecting and distributing information. "If people have a practice pattern that is not consistent with your standards, they probably don't know how to change that without some additional help," he explained. "To just send out a letter telling a physician he or she is an outlier isn't the answer."

New York state, which releases physician-specific report cards, follows them up by working with outliers to identify and address problems. When one hospital performed particularly poorly on a report card on the mortality rates of cardiac surgery patients, for example, hospital and state officials did some investigating. They found that the hospital's candidates for standard cardiac surgery were low-risk and did not die any more frequently than other hospitals' patients. What they found upon further investigation was that the mortality rate for emergency bypass procedures was nearly 20% higher than the state average. This pushed the hospital's overall rate up dramatically, making it appear as if there were a problem with all of the hospital's patients. The hospital changed its procedures to better stabilize emergency patients before sending them to surgery, dramatically reducing its overall mortality rate.

  • Is the system being gamed? A problem area in both New York's and Pennsylvania's report cards is the coding system used to indicate just how sick patients are--and how likely they are to die during surgery. In Pennsylvania, the state assesses the likelihood that cardiac surgery patients will die based on codes that hospitals provide. Patients coded at a low level of severity of illness are not expected to die during surgery; if they do die, the surgeon's mortality rating generally goes up.

But David Shulkin, ACP Member, chief medical officer at HUP, said that if a patient is admitted for surgery and goes to the operating room before any diagnostic or lab tests can be performed, hospital personnel cannot provide the codes needed to show that the patient is seriously ill and at risk of dying. If the patient then dies during surgery, the mortality rates of both the hospital and the physician will rise, and both could be listed as outliers.

Physicians typically understand how the coding system works, and in New York and Pennsylvania, there are tales of hospital personnel upgrading codes to make patients appear sicker than they really are. And some physicians, Dr. Shulkin noted, have refused to operate on high-risk patients. "Physicians are weighing in their mind, some consciously and others more subconsciously, whether they are really willing to have another mortality statistic on their report card," he said.

New York Commissioner of Health Mark Chassin, MD, said that there is no statistical evidence to support such claims, but he acknowledged hearing such stories--and said they often reveal a gross misunderstanding of how the system works. He heard of one surgeon who refused to operate on a patient with an ascending aortic aneurysm who also needed bypass surgery because the physician feared his mortality rating would suffer. But Dr. Chassin noted that only isolated coronary artery bypass graft procedures are included in the state's reports. "Misconceptions are getting in the way of surgeons' understanding and using the data effectively," he lamented.

Working together

Perhaps because they have learned something from such stories, many HMOs say they are somewhat reluctant to release their physician-specific data to the public. "I would feel uncomfortable publishing numbers with physicians' names until we have an opportunity to work with them," United Healthcare's Ms. Smith explained.

Judging by experiences that institutions like HUP have had, publishing this information could be the worst thing HMOs could do. "One of the most important components of quality improvement is taking fear and threats out of the process," Dr. Shulkin said. "As soon as you publish a physician's name in the paper and link it to an excess mortality rate, you've eliminated a lot of opportunity for real system redesign and quality improvement activities. You've put on pressure to do quick and dirty fixes, which usually means changing criteria for patient selection or getting rid of physicians with bad statistics."

And if physicians are scared away from the quality improvement process, HMOs would likely take it over, leaving physicians without a voice in the process. "If we don't measure performance with clinical rigor ourselves," explained New York's Dr. Chassin, "it will be done in a sloppy way by people using data that aren't very clinically sound. These will be used by payers and employers to make decisions about who is in and who is out of networks, and it will not be focused on quality improvement."
