Under new chief, AHCPR finds life after guidelines

The agency shifts from creating clinical protocols to sponsoring evidence-based outcomes research

From the November 1997 ACP Observer, copyright 1997 by the American College of Physicians.

By Deborah Gesensway

WASHINGTON—He may work behind the scenes, but his position is central. John M. Eisenberg, MACP, the new administrator of the federal Agency for Health Care Policy and Research (AHCPR), has been serving as senior advisor to HHS Secretary Donna Shalala on all issues relating to the quality of health care.

It's a big job, and his responsibilities include coordinating HHS' role in the President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry, which was appointed by President Clinton to help consumers obtain quality health care. But his primary task is to guide AHCPR as it changes its mission from creating clinical guidelines to helping medicine embrace evidence-based practice.

Until last spring, when he began working with the AHCPR, Dr. Eisenberg, 51, was chairman of the department of medicine at Georgetown University Medical Center in Washington, D.C., and an ACP Regent. Before that, he was chief of the division of general internal medicine and a professor at the University of Pennsylvania. He has been president of the Society for General Internal Medicine and the Association for Health Services Research, as well as vice president of the Society for Medical Decision Making. He was chairman of the Congressional Physician Payment Review Commission.

Dr. Eisenberg met with ACP Observer to discuss his priorities and goals for the agency.

ACP Observer: Most physicians associate AHCPR with clinical practice guidelines. Why did the agency stop producing those guidelines?

Dr. Eisenberg: Before I came to AHCPR, the agency convened users of guidelines and people involved in the guideline panels. The agency asked what they thought was good about what we did. Most people said that what the agency did well was convene the experts, fund an impartial analysis of the literature, and make that evidence available. They said that it was nice that the guidelines were written, but the key part was pulling the evidence together. So the agency staff concluded that AHCPR had a special niche here. Let's do what we do best. Let's support the scientific basis for understanding what works and what doesn't work. Others can take that information and produce guidelines.

We have some capacity to do research on outcomes and quality ourselves, but most of what we will do is sponsor research through contracts and grants. We think of this mission as developing tools, talents and teams. We want to develop the tools we can give researchers and also invest money in training researchers and others who can use the research. We also will develop the teams to carry this out. One example is the new evidence-based practice centers around the country.

Q: How do these centers work?

A: They will ask the question, "What is the evidence for how this particular service ought to be provided or how this particular disease ought to be treated?" They'll produce an evidence report that says "Here's what the literature says right now." There are few organizations that can afford to do this work, so it's appropriate that AHCPR do this. AHCPR is funding 12 centers across the country. [ACP is formally collaborating with one at the University of Texas, San Antonio Cochrane Center.]

In some ways, this is similar to the way the College contracts with researchers to write the background reports for the Clinical Efficacy Assessment Project (CEAP) committee. Then ACP writes its guidelines based on that analysis. In our case, I can envision that different professional societies might use an evidence report in different ways. One may want to write its own guideline. Another may want to use it for continuing education.

Through their activities, AHCPR and ACP set the standards to which others writing guidelines now ought to aspire. We showed how it could be done, but it's expensive. The agency was only able to produce 19 guidelines.

Q: Are there any guidelines still in the pipeline?

A: We had two guidelines left at the end of the guideline period. Colorectal cancer screening has been done as a prototype of an evidence report. And our headache guideline is being divided into four or five different topics and will be released as evidence reports.

Q: Won't this new system—different organizations all creating their own guidelines—lead to conflicting recommendations, which doctors find frustrating?

A: We should celebrate the diversity of different guidelines, because guidelines are a snapshot of what we know at a particular time. If there is diversity in guidelines, that suggests to us that more research is needed in that field.

Most important, we want to make sure that these guidelines are available for the practicing community. When you have 15 minutes for a visit, you can't spend a substantial part of that time searching for a guideline.

Q: How will AHCPR help physicians get their hands on those guidelines?

A: An important AHCPR initiative is the new National Guideline Clearinghouse. [The target date for launching the clearinghouse, which will be in the form of an Internet Web site, is fall 1998.] It will provide a side-by-side analysis of the guidelines, so that, for instance, an ACP guideline on PSA testing for prostate cancer screening might be side-by-side with an American Urological Association guideline.

Q: Health plans, insurers and governments have at times used the results of health services research to punish doctors who deviate from the norm. What can researchers do to get doctors to buy in to the use of tools, like guidelines, that aim to improve the outcomes of care?

A: The research community understands better than ever the need to look into other factors that influence outcomes. For instance, it would be a mistake to judge a doctor solely on the basis of outcomes without looking at the patients who come into the practice. We are learning that to properly consider the right way to pay plans and doctors and the right way to evaluate their outcomes, you need to know the risks—the kinds of patients who come to the practice—and you need to know severity, meaning the problems those patients have once they are in the practice or the plan. Neither will be sufficient alone to adequately evaluate the quality of care that's being provided.

Researchers are also getting more sophisticated in looking at how well doctors follow through on care they know they should be giving. What proportion of patients got a thrombolytic agent when the patient had a myocardial infarction? What proportion of women in a health plan had mammography? You can profile physicians' utilization and you can say that one doctor is high or one doctor is low, but it is difficult to measure whether that care was appropriate for the patient's problem. To do that, you have to get more precise clinical information about the patients so you can understand the kind of clinical circumstances in which the clinical decisions were made. Now what we are all interested in observing—and hopefully evaluating—is the impact of computerized hospital databases on the evaluation of the processes of care.

Q: Are doctors ready for evidence-based practice?

A: I think most physicians find it frustrating to practice in the dark, and they would rather not be flying by the seat of their pants when they are practicing. If you have to practice with uncertainty—and doctors always will—it would be comforting to know that the uncertainty comes because the literature isn't available, not because the literature is available and you don't have it.

There's a difference between the evidence that exists in one physician's experience, and the evidence that exists in the literature. You can never replace the nuances that you get from experience and from interaction with patients with pure numbers. But you also can't replace good statistical analysis, solid epidemiology and health services research with one person's experience. Neither alone will suffice.
