Monday, December 13, 2010

New Papers on Healthcare Education

These new papers touch on areas that we at Palmer College are involved in, so I thought I would post them here.

1. Clark ML, Hutchison CR, Lockyer JM. Musculoskeletal Education: A Curriculum Evaluation at one University. BMC Medical Education 2010, 10:93. doi:10.1186/1472-6920-10-93

ABSTRACT
Background: The increasing burden of illness related to musculoskeletal diseases makes it essential that attention be paid to musculoskeletal education in medical schools. This case study examines the undergraduate musculoskeletal curriculum at one medical school.
Methods: A case study research methodology used quantitative and qualitative approaches to systematically examine the undergraduate musculoskeletal course at the University of Calgary (Alberta, Canada) Faculty of Medicine. The aim of the study was to understand the strengths and weaknesses of the curriculum guided by four questions: (1) Was the course structured according to standard principles for curriculum design as described in the Kern framework? (2) How did students and faculty perceive the course? (3) Was the assessment of the students valid and reliable? (4) Were the course evaluations completed by student and faculty valid and reliable?
Results: The analysis showed that the structure of the musculoskeletal course mapped to many components of Kern's framework in course design. The course was subject to a high level of commitment to teaching, included a valid and reliable final examination, and valid evaluation questionnaires that provided relevant information to assess curriculum function. Analysis also identified several weaknesses in the course: the apparent absence of a formalized needs assessment, course objectives that were not specific or measurable, poor development of clinical presentations, small group sessions that exceeded normal 'small group' sizes, and poor alignment between the course objectives, examination blueprint and the examination. Both students and faculty members perceived the same strengths and weaknesses in the curriculum. Course evaluation data provided information that was consistent with the findings from the interviews with the key stakeholders.
Conclusions: The case study approach using the Kern framework and selected questions provided a robust way to assess a curriculum, identify its strengths and weaknesses and guide improvements.

2. Botezatu M, Hult H, Fors UG. Virtual Patient Simulation: what do students make of it? A focus group study. BMC Medical Education 2010, 10:91. doi:10.1186/1472-6920-10-91

ABSTRACT
Background: The learners' perspectives on Virtual Patient Simulation systems (VPS) are quintessential to their successful development and implementation. Focus group interviews were conducted in order to explore the opinions of medical students on the educational use of a VPS, the Web-based Simulation of Patients application (Web-SP).
Methods. Two focus group interviews - each with 8 undergraduate students who had used Web-SP cases for learning and/or assessment as part of their Internal Medicine curriculum in 2007 - were performed at the Faculty of Medicine of Universidad el Bosque (Bogota), in January 2008. The interviews were conducted in Spanish, transcribed by the main researcher and translated into English. The resulting transcripts were independently coded by two authors, who also performed the content analysis. Each coder analyzed the data separately, arriving to categories and themes, whose final form was reached after a consensus discussion.
Results. Eighteen categories were identified and clustered into five main themes: learning, teaching, assessment, authenticity and implementation. In agreement with the literature, clinical reasoning development is envisaged by students to be the main scope of VPS use; transferable skills, retention enhancement and the importance of making mistakes are other categories circumscribed to this theme. VPS should enjoy a broad use across clinical specialties and support learning of topics not seen during clinical rotations; they are thought to have a regulatory effect at individual level, helping the students to plan their learning. The participants believe that assessment with VPS should be relevant for their future clinical practice; it is deemed to be qualitatively different from regular exams and to increase student motivation. The VPS design and content, the localization of the socio-cultural context, the realism of the cases, as well as the presence and quality of feedback are intrinsic features contributing to VPS authenticity.
Conclusions. Five main themes were found to be associated with successful VPS use in medical curriculum: Learning, Teaching, Assessment, Authenticity and Implementation. Medical students perceive Virtual Patients as important learning and assessment tools, fostering clinical reasoning, in preparation for the future clinical practice as young doctors. However, a number of issues regarding VPS design, authenticity and implementation need to be fulfilled, in order to reach the potential educational goals of such applications.

3. Deom M, Agoritsas T, Bovier PA, Perneger TV. What doctors think about the impact of managed care tools on quality of care, costs, autonomy, and relations with patients. BMC Health Services Research 2010, 10:331. doi:10.1186/1472-6963-10-331

ABSTRACT
Background: How doctors perceive managed care tools and incentives is not well known. We assessed doctors' opinions about the expected impact of eight managed care tools on quality of care, control of health care costs, professional autonomy and relations with patients.
Methods: Mail survey of doctors (N=1546) in Geneva, Switzerland. Respondents were asked to rate the impact of 8 managed care tools on 4 aspects of care on a 5-level scale (1 very negative, 2 rather negative, 3 neutral, 4 rather positive, 5 very positive). For each tool, we obtained a mean score from the 4 separate impacts.
Results: Doctors had predominantly negative opinions of the impact of managed care tools: use of guidelines (mean score 3.18), gate-keeping (2.76), managed care networks (2.77), second opinion requirement (2.65), pay for performance (1.90), pay by salary (2.24), selective contracting (1.56), and pre-approval of expensive treatments (1.77). Estimated impacts on cost control were positive or neutral for most tools, but impacts on professional autonomy were predominantly negative. Primary care doctors held more positive opinions than doctors in other specialties, and psychiatrists were in general the most critical. Older doctors had more negative opinions, as well as those in private practice.
Conclusions: Doctors perceived most managed care tools to have a positive impact on the control of health care costs but a negative impact on medical practice. Tools that are controlled by the profession were better accepted than those that are imposed by payers.

This will be the last blog post of 2010. We are all soon heading into our all-too-short vacation break. I wish you all the very best for the upcoming holiday season and for the new year.

Monday, December 6, 2010

Rubrics Continued

Rubrics comprise 4 general parts: a task description, the characteristics to be rated (usually placed in rows), the levels of mastery (usually placed in columns), and a description of each mastery level; that is, of each cell.

The task description is the outcome being assessed or the instructions a student is given for an assignment. The characteristics to be rated are the skills, knowledge or behaviors to be demonstrated by the student. The levels of mastery should be written in clear language; an example might be something along the lines of: exemplary, proficient, marginal, unacceptable. Finally, each cell would contain a description of what is required at that mastery level.
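
To make that structure concrete, here is a minimal sketch in Python of how a rubric's parts might be represented as a simple data structure. The task, characteristics, mastery levels and cell descriptions are all hypothetical, chosen only to illustrate the rows-by-columns layout described above.

    # A minimal, illustrative rubric. Rows are the characteristics to be
    # rated; columns are the levels of mastery; each cell holds a
    # description of what that level looks like. All content is hypothetical.
    MASTERY_LEVELS = ["Exemplary", "Proficient", "Marginal", "Unacceptable"]

    rubric = {
        "task": "Write a 500-word patient case summary.",
        "characteristics": {
            "Organization": {
                "Exemplary": "Logical flow; every section clearly developed.",
                "Proficient": "Mostly logical flow; minor lapses in structure.",
                "Marginal": "Structure is difficult to follow in places.",
                "Unacceptable": "No discernible organization.",
            },
            "Use of evidence": {
                "Exemplary": "All claims supported by appropriate sources.",
                "Proficient": "Most claims supported by appropriate sources.",
                "Marginal": "Few claims supported; sources are weak.",
                "Unacceptable": "No supporting evidence provided.",
            },
        },
    }

    # Look up the description sitting in one cell of the rubric.
    print(rubric["characteristics"]["Organization"]["Marginal"])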

The University of Hawaii at Manoa (1) suggests that there are 6 steps to developing a rubric:

Step 1: Identify what it is you wish to assess.

Step 2: Identify the characteristics you wish to rate. Here, you would detail the skills or knowledge you plan on evaluating, limiting them to those you feel are most critical or important.

Step 3: Identify the levels of mastery. The authors recommend using an even number of categories, so that there is no middle category to serve as a "catch-all" when scoring.

Step 4: Describe each level of mastery for each characteristic (cell). Start by describing the best work you could reasonably expect to receive for that characteristic, and set that as your top category. Determine what would constitute unacceptable work and set that as your bottom category. Finally, develop your middle categories, ensuring that there is no overlap between any of them.

Step 5: Test the rubric. Apply it to an assignment, and share it with colleagues for their input. You also need to determine the minimum level of work you would find acceptable for passing. This could be based on an average, a total score, or achieving a score of, say, marginal on every cell (a short scoring sketch follows these steps). Or, of course, you could set the standard higher than that.

Step 6: Review and revise. It takes work to set rubrics up and to ensure they measure what you wish to measure. Rubrics also let us share our grading expectations, which can be helpful; consider, for example, how a rubric might be used to assess a technique practical examination.
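
As a rough illustration of the passing rules mentioned in Step 5, the short Python sketch below scores a completed rubric three ways: by average score, by total score, and by requiring at least a "marginal" rating on every characteristic. The numeric values, ratings and pass thresholds are hypothetical; they simply show how the three standards differ.

    # Hypothetical mapping of mastery levels to numeric scores (higher is better).
    LEVEL_SCORES = {"Exemplary": 4, "Proficient": 3, "Marginal": 2, "Unacceptable": 1}

    # One student's rating on each characteristic (illustrative data only).
    ratings = {
        "Organization": "Proficient",
        "Use of evidence": "Marginal",
        "Clinical reasoning": "Proficient",
    }

    scores = [LEVEL_SCORES[level] for level in ratings.values()]

    average_score = sum(scores) / len(scores)   # e.g. pass if the average is at least 2.5
    total_score = sum(scores)                   # e.g. pass if the total is at least 8
    all_at_least_marginal = all(s >= LEVEL_SCORES["Marginal"] for s in scores)

    print(f"Average: {average_score:.2f}")
    print(f"Total: {total_score}")
    print(f"At least Marginal on every characteristic: {all_at_least_marginal}")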

References
1. University of Hawaii at Manoa. http://manoa.hawaii.edu/assessment/howto/rubrics.htm. Accessed December 3, 2010.