Maintenance of Certification is needed, but it needs to change

ACP will continue to press for a process that ensures physicians remain up-to-date on clinical knowledge, but in a much less burdensome way than the current methods.

One topic that comes up frequently in my conversations with members is the American Board of Internal Medicine's Maintenance of Certification (MOC) process. Based on the sample of members who raise this issue with me during my travels as President of the College, members are not happy. Some would like the entire MOC enterprise to disappear; others accept the principle of regular verification of the knowledge required to practice medicine but object to the program's complexity, burden or expense.

In addition to these concerns, MOC raises interesting questions related to learning, expertise and assessment. First is the so-called “fallacy of self-assessment.” I will admit that I indulged in this one for a long time. My mental model of lifelong learning was that I would, in the course of my clinical practice, detect areas in which my fund of knowledge had become thin and, recognizing this, I would brush up, read more, seek out an expert or otherwise bring myself up to speed.

The problem with this scenario is that it depends on my ability to recognize suboptimal performance. Sadly, there is extensive empirical evidence that, absent feedback on our performance, our ability to accurately self-assess is very poor across a wide range of domains. We need data on our performance, ideally data in a variety of forms and from a variety of sources, to understand how we are doing. Fund of knowledge testing is potentially a useful source of data.

A second issue is that different physicians, even those in the same specialty or subspecialty, do different things. Theories of workplace learning recognize that people can acquire deep conceptual understanding and expertise outside formal educational settings. From this perspective, what a practicing physician does in the course of caring for her patients can be considered a curriculum.

Inevitably, a clinician with one area of focus learns different things and develops expertise different from that of a peer with identical formal education but a different postresidency practice. Recognizing this, many ACP members ask to be tested on what they actually do, not on the full range of internal medicine.

The idea that we should be tested only on what we routinely see is problematic. First, patients don't necessarily know how a physician has individualized his practice. Furthermore, established patients can develop new problems that fall outside their clinician's focus area. Finally, there is a core of internal medicine, a shared foundation of knowledge and understanding that allows internists to care collaboratively for patients with complex or emerging conditions. For all these reasons, it is reasonable to require that we demonstrate that we have maintained and extended our command of core internal medicine.

Patients depend on us for our adaptive expertise, the ability to invent novel, effective responses to situations not previously encountered. Routine expertise, by contrast, is fluent pattern recognition, the ability of a practitioner to swiftly recognize something she has seen before and proceed. We could not get through a day without its efficiency. However, physicians also need to respond skillfully and knowledgeably when faced with something unusual or something they have never seen before. This complex skill requires the ability to perceive that a situation is out of the ordinary, the capacity to reason from first principles, and deep conceptual understanding.

Expertise does not depend entirely on what is contained in the physician's head, however. A significant dimension of skillful performance is the ability to access and deploy resources in the environment. Knowing who knows or where to find information to inform patient care is arguably as important as possessing the same information oneself.

This brings us to another objection to MOC assessment: The closed-book, high-stakes examination does not resemble how physicians function when faced with difficult or unfamiliar clinical situations. There are testing strategies that better reflect what physicians actually do. Charles Friedman, PhD, director of the Health Informatics Program at the University of Michigan in Ann Arbor, has proposed a two-stage knowledge assessment in which the examinee would first answer a question unaided and then be given a second chance to address the same question using external resources. Staged testing would assess the examinee's individual fund of knowledge, as the current MOC examination does, but would also permit assessment of her ability to recognize when she needs to draw on supplementary sources and of her skill in accessing and applying relevant information to the clinical problem.

Other objections to the MOC process emphasize its expense and burden. Both barriers could be largely eliminated if the data used to assess our performance were drawn from our actual work, as documented in electronic health records (EHRs). While EHRs hold the potential to generate “big data” on physician performance, interrogating them to discern real quality of care is a significant challenge. In addition, such an approach would not serve physicians who want to maintain their certificates but are not in clinical practice, and it would not verify that a physician is prepared to deal appropriately with a situation outside of routine practice.

One expense cannot be avoided: Constructing high-quality, psychometrically valid items for high-stakes tests is inherently costly. The National Board of Medical Examiners estimates that it costs between $750 and $1,000 per item to develop straightforward multiple-choice questions for use on the Step examinations.

While we have been wrestling with the Maintenance of Certification conundrum here in the U.S., our British colleagues have been developing what they call “revalidation.” Prompted largely by the notorious case of Harold Shipman, a general practitioner found guilty in 2000 of murdering 15 of his patients and suspected of killing as many as 250, revalidation is a process of assessment against the standards of the “Good Medical Practice” framework. Revalidation began in December 2012. It requires annual formative appraisal; multisource feedback; review of critical incidents, patient complaints and compliments; and sign-off by an academic or National Health Service supervisor, or both. The General Medical Council is the regulatory body that oversees both licensing and revalidation. Of note, there is no high-stakes, secure examination in the British system. Despite this difference, revalidation seems to be about as popular among British physicians as MOC is here.

Despite MOC's lack of popularity, ACP recognizes the importance of a periodic formal assessment of physicians. ACP's Ethics, Professionalism and Human Rights Committee characterizes the continuing advancement of competence as a professional obligation for all of us who see patients; a mechanism for verification of this competence is an important element in our social contract.

The College has several resources to help its members with MOC. First, there's the Medical Knowledge Self-Assessment Program (MKSAP). The ACP website also offers clear instructions on MOC requirements based on when a physician's current certificate expires. In addition, the College hosts an online special interest group on MOC, a forum where members can share information, experiences and strategies for success in MOC. College leaders also use the comments posted there as a source of information when advocating on behalf of members.

The current MOC process needs work. ACP will continue to agitate for a process that minimizes burdens and ensures value to participating physicians.