Monday, January 30, 2012

The IRB Application

We are, gratifyingly, seeing more and more research under development by our non-research faculty. And many of those who are conducting classroom research are using surveys to do so. Last week, I wrote a blog post about how to avoid certain problems when developing questions. This week, I want to focus on the actual IRB application.

As you are all aware, before you can conduct any human subject research, you must submit an application to the IRB and receive approval. This approval may indicate that your project is exempt, or that it was approved after expedited or full board review. To make that happen, you need to complete the IRB application (http://w3.palmer.edu/irb/Forms.htm) and submit it to the college Human Protections Administrator (currently me). I have noted, however, that the application is not always completed very well; doing it adequately takes more than just a few minutes. Certain questions require real time and thought, notably questions 5, 8, 9, and 11. Each of these has multiple parts, and each requires you to think through the issues that will arise when you gather data.

Question 5: This has parts a through i, and it addresses the project methodology. You will be asked to provide a hypothesis (for more detailed forms of research) or goals. What are you trying to do in your project? You will need to describe your methods, and a one-word answer such as “survey” will not be sufficient. A survey of whom? On paper or via the internet? Provide detail here. Who will be surveyed, and how will they be selected? How will your data be analyzed: with descriptive statistics or more rigorous inferential statistics? How long will the study last? How will you protect participant confidentiality? How will you use the study results: for a paper? For presentation at ACC-RAC? All of these questions require some thought and some detail.

Question 8: This asks about the participants: what the inclusion and exclusion criteria are, and who is to be involved. You will be asked about age range, gender, and ethnicity; if your participants are students, your answer should reflect that. You will also be asked to describe your relationship to the students, as well as risks and benefits; for surveys, there are normally few to note.

Question 9: This deals with your data sources. For surveys of students, the students themselves are obviously the source of the data, so the question will ask how you will de-identify the information to protect student anonymity. If you are involving patients, it also asks about HIPAA protections.

Question 11: This addresses how you plan to recruit your participants and how you plan to obtain informed consent. In most surveys, many of which are found exempt, a student’s willingness to participate is taken as consent. But in some research, you must have an informed consent form for participants to sign. We can help you determine which is the case for your project.

Finally, please note that the final page requires a supervisor’s signature. We cannot process the application without it.

I stand ready to help prepare this document, as well as plan research. Just let me know.

Monday, January 23, 2012

Developing Survey Questions

A good many of our faculty have used surveys in their classrooms. Each time they do, time must be spent constructing the questions that are asked, and there is, as might be expected, a great deal to take into account when writing them. Here are some issues to consider when writing survey questions.

1. Wording items: Sometimes the answers to your question depend on whose opinions are under consideration. Nardi notes that if you ask respondents to agree or disagree with “Merit raises should be eliminated for all workers,” you will get a different answer than if you ask them about “I feel that merit raises should be eliminated for all workers.” His point is that the first construction is about a general belief, while the second is much more personal and asks respondents to consider the answer for themselves. This suggests that as you write questions, you might mix in an occasional item asked in a different way and compare its results to those of other, similar questions. In general, though, you should stick to writing either with “I” or with “you” and not switch later to more general phrasing. A second suggestion is to avoid negatives in statements, because some people will not know whether agreeing with a negative statement means they are disagreeing with it.

2. Statement directions: Mix the direction of your statements so that not all the answers for a given set of opinions lead to “agree” or all lead to “disagree.” Word some questions so that people must disagree with some and agree with others. An example might be to ask “Staying up late the night before a test helps a lot,” later followed by “Getting a good night’s rest before a test is helpful.” You could not really agree with both of these. Mixing the direction reduces what is known as response bias, which occurs when people simply answer most questions the same way, by checking, say, “disagree” for all.
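One practical consequence of mixing directions: before you sum or average items into a scale score, the reverse-worded items must be re-scored so all items point the same way. Here is a minimal sketch in Python; the item names and the 5-point scale are hypothetical illustrations, not drawn from Nardi:

```python
# Reverse-code mixed-direction Likert items before computing a scale score.
# Assumes a hypothetical 5-point scale: 1 = strongly disagree ... 5 = strongly agree.

SCALE_MAX = 5

def reverse_code(score, scale_max=SCALE_MAX):
    """Flip a Likert score so all items point in the same direction."""
    return scale_max + 1 - score

# One respondent's answers; 'good_rest_helps' is worded opposite to the others.
responses = {
    "staying_up_late_helps": 2,
    "cramming_is_effective": 1,
    "good_rest_helps": 5,  # reverse-worded item
}
reversed_items = {"good_rest_helps"}

adjusted = {
    item: reverse_code(score) if item in reversed_items else score
    for item, score in responses.items()
}
scale_score = sum(adjusted.values()) / len(adjusted)

print(adjusted)                # {'staying_up_late_helps': 2, 'cramming_is_effective': 1, 'good_rest_helps': 1}
print(round(scale_score, 2))   # 1.33
```

Once the reverse-worded items are flipped, a low average consistently means one pole of the attitude and a high average the other, which is what makes the summed score interpretable.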

3. Always and never: Avoid these words; people rarely always or never feel something about a statement. It is better to phrase such questions using choices such as “most of the time” or “infrequently.”

4. Double-barreled items: These are questions that actually measure two things at the same time. Such questions often include the word “and.” Consider this: “Do you like ham and eggs?” How do you answer if you like one but not the other? Avoid these.

5. Leading questions: You can consciously or unconsciously allow your own personal biases to creep into your survey questions. If you ask “Do you agree that everyone should undergo drug testing on our campus?” you are leading people in a particular direction and suggesting they should agree with you. You should rewrite this as a statement, “Everyone should undergo drug testing on our campus,” and include it in a set of questions that covers a range of viewpoints.

This is but a small amount of information on proper question development. Taking these issues into account will help give you richer and more meaningful data when you conduct surveys.

References
1. Nardi PM. Doing survey research: a guide to quantitative methods. Boston, MA: Pearson Education, Inc.; 2003

Tuesday, January 17, 2012

Qualitative Research

I return to the conduct of qualitative research. Such research has two main differences from the better-understood quantitative research: (1) it focuses on social and interpreted, rather than quantifiable, phenomena, and (2) it aims to discover, describe, and understand, rather than to test and evaluate. It looks at very different questions than does quantitative research. Here is one example: consider a project that wishes to examine changes in pain intensity among a group of patients with low back pain. Using quantitative methods, we can collect baseline pain readings with an instrument such as a Numerical Rating Scale (pain rated on a scale of 0-10, with 0 being no pain and 10 being the worst pain imaginable). We can then collect follow-up ratings 2 weeks after treatment. Each person provides his or her own self-rated pain measures. Patient #1 might have an initial score of 8 and a follow-up score of 4; patient #2 might report the exact same measures. But does this mean that their experiences and perceptions regarding their pain are exactly the same? We cannot know using quantitative methods. Instead, we might conduct a corollary project in which we interview a select group of patients, so that we can gain a better sense and understanding of their lived experience with pain. We would then have descriptive information to analyze: words and text. Because the data are words rather than numbers, those who conduct qualitative research rarely discuss validity; instead, they discuss credibility.
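The quantitative limitation in the example above is easy to see concretely: two patients with identical scores are indistinguishable numerically, whatever their lived experience. A minimal sketch (the patient data are hypothetical):

```python
# Two hypothetical low back pain patients rated on a 0-10 Numerical Rating Scale.
patients = {
    "patient_1": {"baseline": 8, "follow_up": 4},
    "patient_2": {"baseline": 8, "follow_up": 4},
}

# Change score = baseline minus follow-up; a larger value means more improvement.
changes = {name: d["baseline"] - d["follow_up"] for name, d in patients.items()}

print(changes)  # {'patient_1': 4, 'patient_2': 4}: identical, though their experiences may differ
```

The change scores are all a quantitative analysis sees; everything about how each patient actually experienced that 4-point improvement is invisible, which is exactly the gap the interview-based corollary project is meant to fill.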


There are a series of questions one can use to read and interpret papers presenting qualitative research. According to the Users’ Guides to the Medical Literature (1), questions to ask include:

Is qualitative research relevant? Is my question about social, rather than biomedical, phenomena? Do I want a theoretical or conceptual understanding of the problem?

Are the results credible? Was the choice of participants explicit and comprehensive? Was ethics approval received? Was data collection comprehensive and detailed?

What are the results?

How can I apply the results to patient care? Does the study offer helpful theory? Does it help me understand the context of my practice? Does it help me understand social interactions in clinical care?

These are all good questions through which to view the methods and results of a qualitative study.

References
1. Guyatt G, Rennie D, Meade MO, Cook DJ. Users’ guides to the medical literature, 2nd ed. New York, NY: McGraw-Hill; 2008:341-360

Tuesday, January 10, 2012

Three New Articles

Ritenbaugh C, Nichter M, Kelly KL, et al. Developing a Patient-Centered Outcome Measure for Complementary and Alternative Medicine Therapies I: Defining Content and Format. BMC Complementary and Alternative Medicine 2011, 11:135

ABSTRACT

Background: Patients receiving complementary and alternative medicine (CAM) therapies often report shifts in well-being that go beyond resolution of the original presenting symptoms. We undertook a research program to develop and evaluate a patient-centered outcome measure to assess the multidimensional impacts of CAM therapies, utilizing a novel mixed methods approach that relied upon techniques from the fields of anthropology and psychometrics. This tool would have broad applicability, both for CAM practitioners to measure shifts in patients' states following treatments, and conventional clinical trial researchers needing validated outcome measures. The US Food and Drug Administration has highlighted the importance of valid and reliable measurement of patient-reported outcomes in the evaluation of conventional medical products. Here we describe Phase I of our research program, the iterative process of content identification, item development and refinement, and response format selection. Cognitive interviews and psychometric evaluation are reported separately.
Methods: From a database of patient interviews (n=177) from six diverse CAM studies, 106 interviews were identified for secondary analysis in which individuals spontaneously discussed unexpected changes associated with CAM. Using ATLAS.ti, we identified common themes and language to inform questionnaire item content and wording. Respondents' language was often richly textured, but item development required a stripping down of language to extract essential meaning and minimize potential comprehension barriers across populations. Through an evocative card sort interview process, we identified those items most widely applicable and covering standard psychometric domains. We developed, pilot-tested, and refined the format, yielding a questionnaire for cognitive interviews and psychometric evaluation.
Results: The resulting questionnaire contained 18 items, in visual analog scale format, in which each line was anchored by the positive and negative extremes relevant to the experiential domain. Because of frequent informant allusions to response set shifts from before to after CAM therapies, we chose a retrospective pretest format. Items cover physical, emotional, cognitive, social, spiritual, and whole person domains.
Conclusions: This paper reports the success of a novel approach to the development of outcome instruments, in which items are extracted from patients' words instead of being distilled from pre-existing theory. The resulting instrument, focused on measuring shifts in patients' perceptions of health and well-being along pre-specified axes, is undergoing continued testing, and is available for use by cooperating investigators.

Zhang J, Peterson RF, Ozolins IZ. Student approaches for learning in medicine: What does it tell us about the informal curriculum? BMC Medical Education 2011, 11:87 doi:10.1186/1472-6920-11-87

ABSTRACT
Background: It has long been acknowledged that medical students frequently focus their learning on that which will enable them to pass examinations, and that they use a range of study approaches and resources in preparing for their examinations. A recent qualitative study identified that in addition to the formal curriculum, students are using a range of resources and study strategies which could be attributed to the informal curriculum. What is not clearly established is the extent to which these informal learning resources and strategies are utilized by medical students. The aim of this study was to establish the extent to which students in a graduate-entry medical program use various learning approaches to assist their learning and preparation for examinations, apart from those resources offered as part of the formal curriculum.
Methods: A validated survey instrument was administered to 522 medical students. Factor analysis and internal consistence, descriptive analysis and comparisons with demographic variables were completed. The factor analysis identified eight scales with acceptable levels of internal consistency with an alpha coefficient between 0.72 and 0.96.
Results: Nearly 80% of the students reported that they were overwhelmed by the amount of work perceived as necessary to complete the formal curriculum, with 74.3% believing that the informal learning approaches helped them pass the examinations and 61.3% believing that these approaches prepared them to be good doctors. Informal learning activities utilized by students included using past student notes (85.8%) and PBL tutor guides (62.7%), being part of self-organised study groups (62.6%), and attending peer-led tutorials (60.2%). Almost all students accessed the formal school resources for at least 10% of their study time. Students in the first year of the program were more likely to rely on the formal curriculum resources compared to those of Year 2 (p = 0.008).
Conclusions: Curriculum planners should examine the level of use of informal learning activities in their schools, and investigate whether this is to enhance student progress, a result of perceived weakness in the delivery and effectiveness of formal resources, or to overcome anxiety about the volume of work expected by medical programs.

Tschudi-Madsen H, Kjelsdberg M, Natvig B et al. A strong association between non-musculoskeletal symptoms and musculoskeletal pain symptoms: results from a population study. BMC Musculoskeletal Disorders 2011, 12:285 doi:10.1186/1471-2474-12-285

ABSTRACT

Background: There is a lack of knowledge about the pattern of symptom reporting in the general population as most research focuses on specific diseases or symptoms. The number of musculoskeletal pain sites is a strong predictor for disability pensioning and, hence, is considered to be an important dimension in symptom reporting. The simple method of counting symptoms might also be applicable to non-musculoskeletal symptoms, rendering further dimensions in describing individual and public health. In a general population, we aimed to explore the association between self-reported non-musculoskeletal symptoms and the number of pain sites.
Methods: With a cross-sectional design, the Standardised Nordic Questionnaire and the Subjective Health Complaints Inventory were used to record pain at ten different body sites and 13 non-musculoskeletal symptoms, respectively, among seven age groups in Ullensaker, Norway (n = 3,227).
Results: Results showed a strong, almost linear relationship between the number of non-musculoskeletal symptoms and the number of pain sites (r = 0.55). The number and type of non-musculoskeletal symptoms had an almost equal explanatory power in the number of pain sites reported (27.1% vs. 28.2%).
Conclusion: The linear association between the number of non-musculoskeletal and musculoskeletal symptoms might indicate that the symptoms share common characteristics and even common underlying causal factors. The total burden of symptoms as determined by the number of symptoms reported might be an interesting generic indicator of health and well-being, as well as present and future functioning. Research on symptom reporting might also be an alternative pathway to describe and, possibly, understand the medically unexplained multisymptom conditions.

Tuesday, January 3, 2012

A Short List of Good Books for Teachers

Welcome back and happy New Year! As a first post of the new year, here is a short list of excellent resource texts for teachers.

1. Brookfield S. Teaching for critical thinking. San Francisco, CA; Jossey-Bass, 2011

2. Tierney WG. The impact of culture on organizational decision making: theory and practice in higher education. Sterling, VA; Stylus Publishing, LLC, 2008

3. Leamnson R. Thinking about teaching and learning: developing habits of learning with first year college and university students. Sterling, VA; Stylus Publishing, LLC, 1999

4. Driscoll A, Wood S. Developing outcomes-based assessment for learner-centered education: a faculty introduction. Sterling, VA; Stylus Publishing, LLC, 2007

5. Lauer PA. An education research primer: how to understand, evaluate and use it. San Francisco, CA; Jossey-Bass, 2006

6. Dolence MG, Rowley DJ, Lujan HD. Working toward strategic change: a step-by-step guide to the planning process. San Francisco, CA; Jossey-Bass, 1997

7. Stevens DD, Levi AJ. Introduction to rubrics: an assessment tool to save grading time, convey effective feedback and promote student learning. Sterling, VA; Stylus Publishing, LLC, 2005

8. Michaelsen LK, Knight AB, Fink LD. Team-based learning: a transformative use of small groups in college teaching. Sterling, VA; Stylus Publishing, LLC, 2004

9. Allen MJ. Assessing academic programs in higher education. Bolton, MA; Anker Publishing, 2004

10. Gillespie KH, ed. A guide to faculty development: practical advice, examples, and resources. Bolton, MA; Anker Publishing, 2002