Training to competence--so crazy it might just work

From the October 1995 ACP Observer, copyright 1995 by the American College of Physicians.

By Frank Davidoff, FACP

Some students learn faster and more easily than others. Why, then, are most medical education programs equal in length for all comers--the late bloomers and the fireballs? True, an occasional student or resident who gets too far behind his or her classmates is required to spend additional time before moving on, but the opposite--moving learners ahead as soon as they've mastered the material--is practically unheard of. On the face of it, the present educational system, in which all students must progress at the same rate--as if in a lock-step formation--is not the most efficient way for faculty, institutions, or students to use the limited educational resources of time, effort and dollars. Can it be that many students and residents are getting more education than they really need? Is there such a thing as too much of a good thing in medical education?

Efficiency has never been much of an issue in education, particularly medical education. The conventional wisdom has been that you can never learn too much about a subject as enormous as medicine, especially when the lives and health of patients are at stake. It would seem churlish, at the very least, to suggest stripping down medical education merely for the sake of efficiency. Besides, how would such efficiency be defined? And how would you measure it?

Tough times and creative solutions

In this era of growing demands and shrinking resources, however, it isn't unreasonable to take a fresh approach to the concept of educational efficiency. One obvious place to start is to re-examine the possibility that learners should stay in programs only as long as it takes them to meet some predetermined educational goals and then move on to the next step: training to competence, as it is sometimes called. While this concept at first seems outlandish, even a bit crazy, the evidence suggests that when it comes to length of time spent, medical learning is, in fact, quite forgiving.

During both World Wars, for example, medical school training was significantly shortened without demonstrable detriment to the skills and careers of those periods' graduates. In the 1960s and 1970s, the flush of enthusiasm for graduating all those additional doctors so desperately needed to maintain the country's health (O tempora, O mores!) also produced a number of three-year medical school programs, again without apparent harm to trainees or patients. In the present day, several quasi-experimental programs of the Ebert-Ginzberg type (in which the fourth year of medical school and the PGY-1 [internship] year are condensed into one) have started up, allowing at least a selected few of the most capable students to move through the system at an accelerated rate; reports from the field on these programs have so far been favorable (1,2).

Psychometricians have, of course, made their living all along from the variability of student learning, and since educational testing is expensive in time and money, they have been forced to care about test efficiency. As the understanding of test operating characteristics has grown, therefore, a variety of creative and more efficient testing techniques have evolved. It is now clear, for example, that if a test is sufficiently reliable, you can confidently identify higher-scoring students by first using a short version of it, thus saving time, sweat and tears for both students and faculty. Only students who don't do well on the initial short version need then go on to more extensive testing, which establishes their performance level with greater accuracy and confidence.
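
A minimal sketch, in Python, of this two-stage logic makes the arithmetic concrete; the test lengths, the 80% screening cutoff and the simple score model are illustrative assumptions rather than anything drawn from the psychometric literature.

    import random

    random.seed(42)

    SHORT_ITEMS, LONG_ITEMS = 20, 200   # assumed lengths of the short and long forms
    PASS_SCREEN = 0.80                  # assumed cutoff on the short screening form

    def administer(true_ability, n_items):
        """Simulate a score: each item is answered correctly with probability
        equal to the examinee's true ability."""
        correct = sum(random.random() < true_ability for _ in range(n_items))
        return correct / n_items

    students = [random.uniform(0.5, 0.95) for _ in range(100)]  # assumed ability range

    items_used = 0
    for ability in students:
        items_used += SHORT_ITEMS
        if administer(ability, SHORT_ITEMS) >= PASS_SCREEN:
            continue                    # high scorers are identified early and stop here
        items_used += LONG_ITEMS        # only the rest go on to the full-length test
        administer(ability, LONG_ITEMS)

    print(f"Items administered: {items_used} "
          f"(vs. {len(students) * LONG_ITEMS} if everyone took the long test)")

In such a run, only the students who fall below the screening cutoff consume the long test, which is the whole point of the two-stage approach.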

Now if psychometricians working in their microcosm can tailor the evaluation system to fit the student, why can't the rest of the educational system follow suit? It seems only logical to define a set of increasing levels of competence we want students and residents to achieve at the various stages of their professional development, assess their achievement frequently against these criteria, then send them on their way to the next level of training once they meet each standard.

Measuring the unmeasured

And why stop with students and residents? Physicians in practice learn--from reading, from courses, but most of all from the experience of taking care of patients and reflecting on that experience. Common sense, plus much anecdotal experience, tells us that the competence of seasoned clinicians grows with time. (We should probably call such professional development "practicing toward higher competence" rather than "training to competence," since, in theory, there is no easily identified ceiling to the level of competence the best of the best clinicians might achieve.) Paradoxically, practicing clinicians' scores on repeat board exams tend to decrease over time (3), which conventional wisdom attributes to the erosion of practitioners' competence. In view of the demonstrable opportunity for continued learning provided by medical practice, that explanation seems unconvincing. Could it be that board exams simply don't measure many of the more important elements that make up advanced clinical competence? Weak test validity, rather than deterioration of skills, may account for much of the observed change in practitioners' board scores.

In thinking about training to competence, then, it is essential, first, to recognize the reality that when we talk about "competence" we are using the term loosely. We are talking, in fact, about an enormously complex, multi-layered entity, yet to be named, that includes knowledge (knowing), competence (knowing how) and performance (doing), not to mention critical but even more subtle personal, professional and humanistic dimensions (see, for example, references 4, 5).

Second, we must also acknowledge the reality that we are not yet very good at measuring high-level medical competence--the most important thing that medical education produces. Yes, many medical schools have their own assessment programs that serve the traditional "lock-step" system well enough; yes, we have widely accepted standardized national licensing exams; and yes, we have a highly credible system of specialty board certification (in which eligibility requirements are probably as important as the written exams). But despite the enormous efforts of many capable people over many years, the clinical skills evaluation tools that would be critical for the successful operation of a tailored medical education system simply do not exist, at least not at the "high stakes" level of validity and reliability we would need for the purpose.

Indeed, the creators of even high-level clinical assessment systems would be the first to admit that while the existing instruments measure some meaningful elements of competence, they fall far short of measuring the essential totality of high quality medical competence. We know, for example, that roughly 40% of medical interns can pass the written portion of the internal medicine board certification exam (6). But no one would seriously claim on the basis of those scores that they are ready to go out and practice medicine.

Training to competence would also need to face the third and daunting reality of logistics. How could you plan--hire faculty, line up patients, make up class or lab schedules, or put together "coverage" for clinical services--if you were never certain how many students or residents would be arriving for your course or rotation, how long they would be staying, or when they would be leaving? The mind reels at the thought. Moreover, such a fluid system would deprive students and residents of the opportunity to learn from each other, develop mutual support systems, and experience camaraderie and class spirit; such a system would force them to function more as lone individuals, as do graduate students in other disciplines.

A fourth reality of training to competence would be the need to determine what level of competence students would have to achieve before ending one course and moving on to the next. A system in which they needed only to perform at a "lowest common denominator" level, consistent with safety but nothing more, would be neither practical nor credible; it would likely lead eventually to mediocrity, superficiality and a self-reinforcing downward spiral of "eroding goals" (7). By contrast, a system in which all students were expected to demonstrate extremely high levels of performance before continuing on sounds attractive at first, because it could promote the development of a more elite profession overall. But raising the bar this high would be possible only at the cost of sacrificing many perfectly competent students who could not clear these "elite" hurdles but who would be solid practitioners if allowed to complete their training. A standard somewhere in the middle would certainly be more acceptable than the "low ball" option, but would not support the pursuit of excellence. Specialty certifying boards have struggled for decades with this quandary, moving slowly but progressively from the "elite" standard of their early years to the present standard in the middle range. Neither standard has ever seemed entirely satisfactory; perhaps there isn't an optimal one.

All of this is not to say that streamlined medical education systems, systems that train to competence, might not happen in the fullness of time. It's just that presently the obstacles are formidable; indeed, the very idea seems a little crazy--as once were ideas like democracy, moon shots, computers ...

Frank Davidoff is Editor of Annals of Internal Medicine.

References

1. Ebert R, Ginzberg E. The reform of medical education. Health Aff (Millwood). 1988;7:5-38.
2. Thompson JS, Haist SA, De Simone PA, Engelberg J, Rich ED. The accelerated internal medicine program at the University of Kentucky. Ann Intern Med. 1992;116:1084-1087.
3. Ramsey PG, Carline JD, Inui TS, Larson EB, LoGerfo JP, Norcini JJ, Wenrich MD. Changes over time in the knowledge base of practicing internists. JAMA. 1991;266:1103-1107.
4. Levinson W, Kaplan C, Williams G, Clark WD, Williamson P, Lipkin M Jr. What is an expert in medical interviewing? J Gen Intern Med. 1993;8:713.
5. Day RP, Hewson MG, Kindy P Jr, van Kirk J. Evaluation of resident performance in an outpatient internal medicine clinic using standardized patients. J Gen Intern Med. 1993;8:193-198.
6. Schumacher CF. Validation of the American Board of Internal Medicine written examination: a study of the examination as a measure of achievement in graduate medical education. Ann Intern Med. 1973;78:131-135.
7. Senge PM. The Fifth Discipline: The Art and Practice of the Learning Organization. New York: Doubleday; 1990:383-384.
