Are Fourth-Year Medical Students as Prepared to Manage Unstable Patients as They Are to Manage Stable Patients?

 

Abstract

Purpose

To evaluate fourth-year medical students’ assessment and management of an unstable patient.

Method

The authors compared the performance of fourth-year medical students in a clinical performance examination (CPX) across a spectrum of simulated stable conditions with their performance in a case of ST-elevation myocardial infarction (STEMI). All fourth-year medical students at the Medical University of South Carolina participated in an eight-station CPX. Student performance was graded as the percentage of correct steps performed according to checklists developed through a modified Delphi technique. Repeated-measures analysis of variance was performed to compare performance across stations. Data are reported as mean (standard deviation), and P < .05 was considered significant.

Results

A total of 143 fourth-year medical students participated in the study. The percentage of correct actions performed in the STEMI station was 47.8 (9.5), which was significantly lower than in all other stations (P < .001). There was no difference in overall performance between any of the other stable encounters. Students performed significantly worse in the physical exam and management/treatment components of the STEMI station than in the history, differential diagnosis, labs/tests, and diagnosis components.

Conclusions

Fourth-year medical students were less prepared to manage a simulated STEMI case than a range of nonacute conditions. Given the prevalence of coronary artery disease and the need for interns to be equipped to handle emergent situations, this deficiency should be addressed in undergraduate medical curricula.


Each year in the United States, approximately one million people have a myocardial infarction, more than 600,000 people are hospitalized for chronic obstructive pulmonary disease exacerbations, and 11% to 42% of medical inpatients have delirium. Interns are frequently the first line of care during the initial assessment and management of unstable patients with these conditions, as well as during in-hospital cardiac arrest events, but their confidence and performance in these critical situations are poor. Recent research has highlighted that undergraduate medical education is not adequately preparing students to manage unstable patients. Furthermore, the licensing exams administered to medical students, including the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS), do not test whether medical students are prepared to address the life-threatening situations that they will encounter during their intern year and beyond. The stated goal of licensing exams is “to ensure that they [students] have the competencies they require for practice.” Although evaluating the ability to assess and manage unstable patients is not a stated goal of the USMLE Step 2 CS exam, it is important to evaluate whether medical students are being prepared to deliver safe, quality care in situations that involve cardiac, pulmonary, and neurologic urgencies and emergencies.

The clinical performance examination (CPX) is designed to measure medical student competence in basic clinical skills and prepare students for the USMLE Step 2 CS. At the Medical University of South Carolina (MUSC), we administer the CPX as a summative assessment at the beginning of the fourth year of the medical school curriculum. The CPX’s clinical scenarios are specifically designed to (1) represent a variety of the conditions and diagnoses that students are required to learn in clinical training; (2) measure competency in clinical reasoning, physical examination, and patient communication skills; and (3) prepare students to pass the USMLE Step 2 CS exam. Prior to the 2010–2011 academic year, the MUSC CPX did not evaluate students’ initial assessment and management of an unstable patient. In this study, we sought to determine whether there is a quantifiable difference in fourth-year medical students’ performance in evaluating and managing a simulated case of ST-elevation myocardial infarction (STEMI) as compared with a variety of stable medical conditions assessed in our CPX. We hypothesized that students would perform better in the encounters with stable patients.

 

Method

This study was deemed exempt by the institutional review board of MUSC because all students participated in the same examination format and content (i.e., no intervention), and all data provided to the research team were deidentified. We included all fourth-year medical students who took the eight-station CPX immediately after the 2010–2011 academic year.

In July 2011, we administered the standard seven-station exam along with an added eighth station of a patient with acute chest pain having a STEMI. Because this was the first academic year that the STEMI station was included in the CPX, we did not include students’ performance on this station in their grades or transcripts. Students were allowed to review all video recordings of their performance during the CPX, which is the standard practice at MUSC. Students who failed any of the other seven stations remediated the exam, and all subsequently passed. No student remediated the STEMI station, regardless of performance.

Performance environment

All CPX simulations took place in settings that replicate a patient exam room at the MUSC Medical Center (for nonemergent scenarios) or an emergency room bay (for the unstable patient scenario). For the unstable patient scenario, the monitors—the bedside monitor, medical gases monitor (e.g., oxygen), and so forth—imitated those in use at MUSC’s medical center, as did the available medications, intravenous (IV) fluids, IV lines, facemasks, and other supplies. Trained standardized patients (SPs) were used for the simulation sessions at the standard seven stations. For the STEMI station, the simulated patient was a SimMan 2G mannequin running SimMan software (Laerdal, Inc., Stavanger, Norway), and an SP was trained to play the role of a nurse. An SP in an adjoining room role-played the voice of the simulated patient through the mannequin and responded to questions according to a standardized script. The student was told to assume the role of an intern in the emergency room and act as the responding physician. To increase realism, the SP playing the nurse performed all actions that the student–doctor ordered, including attaching vital sign monitors, checking vital signs, giving IV fluids or medication, and drawing labs. Student performance at each station was graded by the SPs according to checklists developed through a modified Delphi technique (as described below). The exact cases and medical diagnoses are blinded for publication to protect test integrity.

Checklist creation

The grading checklists for the CPX’s seven nonacute stations were created by the multispecialty group of MUSC faculty clinician–educators responsible for the development, oversight, and ongoing assessment of the CPX program. We constructed the STEMI station checklist based on current American Heart Association consensus publications and with review by a group of cardiologists at MUSC according to previously reported methods. The development process for all station checklists proceeded through three stages: (1) generating suggested checklist items; (2) endorsing and removing checklist items until consensus was reached; and (3) receiving feedback about the categories into which the steps would be arranged (e.g., history, counseling, treatment) and the order in which the checklist steps should be presented for grading, with the goal of anticipating the approximate order in which students would accomplish the items. The STEMI Checklist items were grouped into six categories: history; physical exam; differential diagnosis; labs and tests; diagnosis; and management. The STEMI Checklist is presented as Appendix 1. To maintain test integrity, we have not provided the checklists for the seven nonacute stations.
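
To make the checklist structure concrete, the sketch below shows one way the six grading categories and the per-category scoring could be represented in Python. The category sizes follow the item counts reported under Statistical analysis below; the data structure, function names, and the example student are hypothetical illustrations, not the actual study materials.

```python
# A minimal sketch of the STEMI checklist structure and scoring.
# Category sizes follow the counts reported in the Method; item-level
# wording is omitted, and all names here are hypothetical.

CHECKLIST_CATEGORIES = {
    "history": 11,
    "physical_exam": 11,
    "differential_diagnosis": 1,
    "labs_tests": 3,
    "diagnosis": 3,
    "management_treatment": 13,
}  # 42 items in total

def category_scores(completed):
    """Percentage of items completed within each category.

    `completed` maps a category to the number of items the grader
    marked as accomplished for one student.
    """
    return {cat: 100.0 * completed.get(cat, 0) / n
            for cat, n in CHECKLIST_CATEGORIES.items()}

def overall_score(completed):
    """Overall station score: correct items / total items, as a percentage."""
    return 100.0 * sum(completed.values()) / sum(CHECKLIST_CATEGORIES.values())

# Hypothetical student who completes 21 of the 42 items (50% overall).
example = {"history": 7, "physical_exam": 4, "differential_diagnosis": 1,
           "labs_tests": 2, "diagnosis": 2, "management_treatment": 5}
print(category_scores(example))
print(overall_score(example))  # 50.0
```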

Orientation for students

There are several points to note about students’ familiarity with the information and skills tested in the STEMI station and with the construct of testing using mannequin-based simulation. Targeted didactics delivered during the third-year internal medicine clerkship at MUSC address the approach to the unstable patient with chest pain, with a particular focus on the differential diagnosis of acute coronary syndrome (ACS), aortic dissection, esophageal rupture, pneumothorax, tamponade, and pericarditis. Additionally, we use mannequin- and SP-based simulation extensively throughout our medical school curriculum. At the beginning of their first year, students are given an extensive orientation to the SimMan mannequin, and they complete at least four simulation exercises with the mannequin during their first and second years.

Prior to the CPX, we gave the students a thorough orientation to the confederate role of the SP nurse and informed them that the nurse would be available to perform any tasks that they directed (e.g., placing monitors, drawing labs). We advised the students that the mannequin would have a live person’s voice supplied by a trained SP from the control room.

Orientation for the two SP roles

We trained the two SPs in three 1-hour sessions with two members of the research team (M.D.M. and M.C.H.; the latter is responsible for putting all SPs through a standardized training program). The actors we used had each been part of the SP program for several years. In addition to the standard training for SPs at MUSC, we gave these team members extensive training on the use of the grading checklist. Prior to grading students, both SPs for the STEMI station practiced using the checklist during 10 live simulations in which they played the role of the nurse SP. We compared the SPs’ grading of the practice simulations with videos of those sessions in order to assess reliability. During the first five sessions, questions emerged about the checklist that we had not anticipated during the initial orientation. For instance, an SP would ask what constituted completion of a checklist item that could be performed in multiple ways, such as obtaining the patient history. In the last five sessions, the SPs’ in-scenario grading had perfect agreement with the video review by the team that constructed the checklist. To ensure consistency, one SP played the voice of the patient and one SP played the role of the nurse for all STEMI simulations during each day of the CPX.

Checklist implementation

During the simulation sessions, the nurse SP recorded on the grading checklist whether the student completed each item. For the checklist items involving the differential diagnosis, ordering of labs and tests, statement of a definitive diagnosis, and initial patient management (items 23–42 in Appendix 1), the student verbalized these to the nurse SP in the following manner. The nurse SP would ask the student a series of questions to standardize how the student was given an opportunity to respond and display his or her thought process. After the student appeared finished with the focused history and physical, the nurse SP would attempt to elicit a differential diagnosis, asking: “Doctor, what do you think is going on with this patient?” If the student did not answer or said “I don’t know,” the nurse SP would repeat the question after roughly one minute.

After the student provided a leading or differential diagnosis, the student was given a prompt to think about the next step of ordering labs and tests. The nurse SP would ask, “Is there anything that you would like for me to do? Can I get you anything?” If the student did not answer or said, “I don’t know,” the nurse SP would repeat the question after roughly one minute. If the student ordered labs or tests for which results would be returned during the scenario (e.g., complete blood count, chemistries, troponin, 12-lead electrocardiogram, or chest x-ray), the nurse SP would say, “I think that we have those results already” and give them to the student in paper form. Then, after the student had time to review the results, the nurse SP would ask, “Is there anything else that you would like for me to do? What can I do to help you?” We intended to give each student a prompt to think about the next steps in stabilizing the patient prior to definitive management. If the student ordered something for which a set of results was not available, then the nurse SP would say, “I’ll make sure that gets ordered.”
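
As a schematic summary of this standardized interaction, the sketch below encodes the nurse SP’s prompt sequence as simple Python control flow. The quoted prompt wording is taken from the description above; the function names, the response handling, and the parsing of orders are illustrative assumptions rather than the actual study script.

```python
# Schematic sketch of the nurse SP's standardized prompt protocol.
# Prompts are quoted from the Method; all other details are assumptions.

# Orders for which prepared paper results existed in the scenario.
RESULTS_ON_PAPER = {"complete blood count", "chemistries", "troponin",
                    "12-lead electrocardiogram", "chest x-ray"}

def ask(prompt, get_response):
    """Ask once; if the student is silent or says 'I don't know',
    repeat the same prompt after roughly one minute."""
    answer = get_response(prompt)
    if answer.strip().lower() in ("", "i don't know"):
        answer = get_response(prompt)  # repeated after ~1 minute
    return answer

def run_protocol(get_response, hand_over_results):
    # 1. Elicit a differential diagnosis after the focused history and physical.
    ask("Doctor, what do you think is going on with this patient?", get_response)

    # 2. Prompt toward ordering labs and tests.
    orders = ask("Is there anything that you would like for me to do? "
                 "Can I get you anything?", get_response)
    for order in (o.strip().lower() for o in orders.split(",")):
        if order in RESULTS_ON_PAPER:
            print("I think that we have those results already.")
            hand_over_results(order)  # results handed to the student on paper
        elif order:
            print("I'll make sure that gets ordered.")

    # 3. After the student reviews the results, prompt toward stabilization.
    ask("Is there anything else that you would like for me to do? "
        "What can I do to help you?", get_response)
```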

Statistical analysis

We calculated the percentage of correct actions completed by each student in each CPX station by dividing the number of items marked as accomplished by the total number of steps on each scenario’s checklist. We performed a repeated-measures analysis of variance (ANOVA) to compare the different stations within the CPX. We checked the assumption of sphericity using the Mauchly test and used a Bonferroni correction in the pairwise comparisons following a significant overall test result. To evaluate performance on different components of the STEMI Checklist and to determine whether certain areas accounted for the poor performance, we performed a one-way ANOVA on performance scores by checklist component: History (11 items), Physical Exam (11 items), Differential Diagnosis (1 item), Labs/Tests (3 items), Diagnosis (3 items), and Management/Treatment Plan (13 items) (see Appendix 1). Data are presented as mean percentage (standard deviation [SD]). In all analyses, we defined significance as P < .05. Analyses were conducted in SPSS version 18.0 (IBM Corporation, Armonk, New York).
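
For readers who prefer code to a prose description, a minimal sketch of the same pipeline is shown below in Python rather than SPSS. The long-format table and its column names (student, station, score) are assumptions; the sketch uses pingouin for the Mauchly sphericity check, statsmodels for the repeated-measures ANOVA, and manually Bonferroni-adjusted paired t-tests for the pairwise comparisons.

```python
# A sketch of the analysis in Python; the authors used SPSS 18.0.
# Assumes a long-format DataFrame with one row per (student, station).
from itertools import combinations

import pandas as pd
import pingouin as pg
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def analyze(df: pd.DataFrame) -> None:
    """'score' holds the percentage of correct checklist actions."""
    # Mauchly test of sphericity, the assumption behind RM-ANOVA.
    spher = pg.sphericity(df, dv="score", within="station", subject="student")
    print(f"Mauchly W = {spher.W:.3f}, P = {spher.pval:.4f}")

    # Repeated-measures ANOVA comparing mean scores across the stations.
    print(AnovaRM(df, depvar="score", subject="student",
                  within=["station"]).fit())

    # Pairwise paired t-tests with a Bonferroni correction, examined
    # only after a significant omnibus result.
    wide = df.pivot(index="student", columns="station", values="score")
    pairs = list(combinations(wide.columns, 2))
    for a, b in pairs:
        t, p = stats.ttest_rel(wide[a], wide[b])
        print(f"{a} vs {b}: t = {t:.2f}, "
              f"Bonferroni-adjusted P = {min(1.0, p * len(pairs)):.4f}")
```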

 

Results

A total of 143 fourth-year MUSC students completed the CPX in July 2011. The mean percentage (SD) of correct actions in each station was as follows: gastrointestinal, 72.3 (12.2); neurologic, 72.3 (15.1); noncardiac chest pain, 72.0 (11.0); endocrine, 68.2 (12.6); pulmonary (adolescent), 66.7 (12.6); obstetric, 66.0 (13.1); and urologic, 62.9 (15.1) (see Figure 1).

Figure 1. Medical student performance in stable scenarios compared with an ST-elevation myocardial infarction (STEMI). In July 2011, senior medical students (N = 143) at the Medical University of South Carolina performed significantly worse in the STEMI station of the clinical performance exam, as compared with the other seven stations, all of which covered nonacute patient presentations. Data reported as mean ± SD; CP indicates chest pain; Pulm, pulmonary. *P < .001 versus STEMI.

Performance on the STEMI encounter, 47.8 (9.5), was significantly lower than that on all of the other stations (P < .001). There were no statistically significant differences in performance among the other seven stations. In the STEMI station, students performed significantly worse on the physical exam, 37.7 (14.6), and patient management/treatment, 36.4 (14.5), components of the grading checklist as compared with the other areas: history, 59.4 (18.1); differential diagnosis, 52.8 (50.1); labs/tests, 75.5 (23.3); and diagnosis, 63.2 (24.5) (see Figure 2).

Figure 2. Comparison of medical student performance in components of the ST-elevation myocardial infarction (STEMI) grading checklist. Senior medical students (N = 143) at the Medical University of South Carolina performed significantly worse in the Physical Exam (Physical) and Patient Management (Mgmt) portions of the STEMI grading checklist, as compared with History, Differential Diagnosis (DDx), Labs/Tests, and Diagnosis. Data reported as mean ± SD. *P < .001 versus Physical Exam; †P < .03 versus Management.

 

Discussion

The clinical years of training that prepare a learner for the transition from medical student to first-year resident are crucial. Educators must bridge a large gap before students are prepared to handle the emergencies that they will face as interns and residents. In this study, we found that fourth-year students’ assessment and initial management of a simulated unstable patient with chest pain, presenting with a classic STEMI, was poor during a CPX. This poor performance stood in stark contrast to students’ consistently better performance in seven SP scenarios involving nonacute conditions that covered a wide range of diagnoses across organ systems.

When we investigated the components of our students’ poor performance, we found that scores on the physical exam (including obtaining vital signs) and management/treatment plan were significantly lower than those on the other areas (history, differential diagnosis, labs/tests, and diagnosis). This highlights two potential areas for improvement. First, students may be unaccustomed to directing other members of the health care team to perform actions such as placing monitors to obtain vital signs. Second, in the CPX’s nonacute stations, students made a plan, but they did not have to do anything in real time that might affect the patient’s condition. This is very different from the STEMI station, in which the patient complained of chest pain and asked the student for immediate help. The student was expected to begin initial therapies and perform multiple tasks, ranging from administering supplemental oxygen and ordering appropriate labs to communicating with the cardiology service and ordering initial pharmacologic therapies for an ACS.

As noted above, each year in the United States millions of patients are treated for unstable cardiac, pulmonary, and neurologic conditions. As interns are frequently the first line of care in these and other emergencies, it is essential that medical schools train students to respond adequately during these crises. Although there is often backup from upper-level residents and attending physicians in emergency situations, a recent study reported that interns feel unprepared for the assessment and initial management of acutely ill patients. When employing unstable patient scenarios in training and assessment, medical schools should focus on areas where breaches to patient safety in high-stakes medical events often occur, such as assessment, diagnosis, communication, and patient management. The deficits in patient management that we report here, and that others have previously reported, are of particular concern given that national guidelines from the Clerkship Directors in Internal Medicine (CDIM) state that medical students should have the skills to discuss, diagnose, and develop a treatment plan for a patient with an acute myocardial infarction, and nearly all schools have in place curricula that address emergency and acute conditions. Our results are consistent with those of Nikendei and colleagues, who found that ward rounding skills in patient assessment and management are insufficiently developed at the end of the fourth year of medical school. Those authors suggest that poor ward rounding skills are secondary to a lack of opportunities to independently round on patients and insufficient supervision of student assessment and management of acutely ill patients. There may be many reasons why these deficits exist, but it is clear that many trainees beginning their final year of medical school are not prepared to assess and begin treatment for acute myocardial infarctions.

Several studies have noted that simulation-based medical education (SBME) can improve the acute care skills of medical students and residents. Steadman and colleagues reported that SBME was superior to problem-based learning (PBL): students tested at the end of a weeklong SBME training course outperformed PBL-trained students across a wide range of unstable patient scenarios. Additionally, McCoy and colleagues reported that SBME was superior to didactic instruction in training medical students to assess and manage simulated cases of acute myocardial infarction and anaphylaxis: SBME-trained students completed 93% of steps correctly during an objective structured clinical exam versus 71% for didactic-trained students. The scoring checklist that they used for the ACS scenario was similar to ours, although they used weighted scoring while we did not. Their results showed a strong effect of SBME, but several caveats must be noted. First, the follow-up testing in both studies took place either one day or one week after the initial training. Second, the students in those studies were fourth-year medical students who had self-selected into an emergency medicine elective that involved this training and testing, whereas we report performance across an entire class of fourth-year medical students who had received didactic training months before the testing date. Thus, the finding that SBME produces excellent performance in caring for the unstable patient, including patients with simulated ACS, needs to be tested over time and in larger cohorts of students.

The major strength of our study is that it included an entire medical school class and thus represents the full spectrum of students and their skill levels in their final year of medical school. This group of students represents a reliable historical control group for assessing the knowledge and skill of the interns that MUSC’s curriculum produces. To our knowledge, no other studies have examined student performance in acute patient care in this manner. When considering the implications of our findings, however, we must ask whether the educational problem under consideration has clinical importance. Numerous studies have shown that medical students and residents are deficient in their abilities to interpret 12-lead electrocardiograms (ECGs) and perform a basic cardiac exam. Furthermore, Jayes and colleagues demonstrated that errors in interpreting ECGs often lead to increased morbidity and mortality related to the misdiagnosis of life-threatening cardiac events, including ACS. Therefore, it seems reasonable to surmise that improved skills in interpreting ECGs would lead to improved diagnosis of life-threatening cardiac events, which in turn would lead to better patient outcomes. Although this has not been definitively proven, we attempted to take this concept one step further and examine the entire spectrum of assessment and initial management of a patient suffering a STEMI as compared with nonacute patient encounters.

There are several weaknesses in our study. First, we report observational data rather than testing an educational intervention. Although we believe that results from an entire medical school class serve as a reliable historical baseline and clearly demonstrate a deficiency in our educational process, these data do not shed light on the solution to the problem. Second, in the seven nonacute stations the patient role was played by a trained SP, whereas in the STEMI station the patient was a mannequin, with an SP in another room giving responses through a speaker in the mannequin. Although this allowed for a conversational encounter, the absence of nonverbal cues that could have helped the student in the assessment and diagnosis phase may have presented a barrier to better student performance. It is possible that the students did especially poorly on the physical exam and management portions of the encounter because they were not directly interacting with a human. Third, our CPX is given in the first quarter of the fourth year of medical school in order to allow time for remediation for anyone who does not pass. Many students take critical care, anesthesiology, or emergency medicine rotations during their fourth year, all of which may greatly aid in learning a structured approach to the unstable patient. We demonstrated a deficiency in students’ clinical competence after three years of training but did not assess their competency at the time of graduation. Finally, this study was conducted at only one medical school, which limits generalizability.

Future research is needed to address a number of issues. The effects of a standardized SBME curriculum on student performance across large groups of students, possibly an entire medical school class, and over time should be investigated. The effect of a mannequin versus a human SP on student performance in the approach to the unstable patient should be evaluated. Finally, this line of research may influence the expectations of licensing and credentialing boards. The CDIM currently has higher and more explicit expectations for what should be taught and assessed in every internal medicine clerkship concerning acute care of unstable patients than what appears to be intended by the National Board of Medical Examiners and the Accreditation Council for Graduate Medical Education in their expectations and requirements for interns in internal medicine.

 

Conclusions

In conclusion, we demonstrated that fourth-year medical students performed significantly worse in the assessment and management of a simulated ACS case than in a range of stable patient encounters. Given the necessity for interns to manage urgent and emergent situations, further research should address the best pedagogical methods for preparing students to care for the unstable patient. Such scenarios may be more appropriate for assessing whether students are ready to transition to the role of doctor. Future research will investigate a wide range of unstable conditions in order to determine whether these findings represent an isolated deficiency or a larger gap in acute care training in medical education.

Source: Acad Med. 2014 Apr;89(4):618–624. doi: 10.1097/ACM.0000000000000192
