The instructional design world, focused as it is on training in business, has done a better job of defining educational outcomes, in my opinion, than we in medicine have. In medicine, we seem wedded to a clinical trial mentality. Not really in terms of study design - the randomized controlled trial is just as much a gold standard in education as it is in medicine - but in terms of outcomes. Just as most biomedical research has been very disease-oriented in its outcomes (lowering a hemoglobin A1c level or improving a physiologic parameter rather than improving somebody's quality of life), educational research in medicine has been focused on proxy measures - student evaluations and post-intervention knowledge testing.
There are many examples in medicine where teaching the "process" is the goal, rather than improving specific knowledge. History taking and physical examination are two easy examples. Evidence-based medicine - looking for research-based answers to clinical questions, assessing their usefulness and applying the conclusions back to the patient - is a really good example. In the two decades or so that I've been teaching EBM, I have seen the community of EBM teachers (and others) bemoan the lack of any higher-order evidence for teaching outcomes. One tool for getting us beyond those more basic outcomes is an evaluation model for training in EBM. I propose the Kirkpatrick Training Evaluation model. Others have proposed it too; I just wanted to blog about it...
The Kirkpatrick scheme comprises, in its original form, four levels of outcome after a training intervention:

1. Reaction - how learners respond to the training (satisfaction, perceived relevance)
2. Learning - the knowledge or skills the learners acquire
3. Behavior - whether learners actually change what they do as a result
4. Results - the downstream outcomes the organization (or, in our case, the patient) actually cares about
We tend to cover the first two outcomes adequately in educational research. It's the last two that are harder. In business, process change can be observed in a factory worker visibly changing how they assemble a widget. In the military, soldiers can be observed using different skills when responding to a threat. In medicine, how do we observe the process of identifying a clinical question, searching for the answer, appraising the research found and applying it back to the patient? Those are primarily "cognitive behaviors" (if you will...) and are hard to observe without intervening in some way - asking the learner what they did, constructing automatic measurements like hits on a web site, etc. In all of those cases, objective, unbiased observation is difficult. It gets even more difficult if you are inclined to measure changed clinical processes as the outcome - treating a given condition in a certain way (prescribing aspirin after heart attacks, etc.). These sorts of interventions have been shown to work "on average" for the patients in the relevant studies. However, because as physicians we often need to apply evidence beyond the usual narrow confines of study inclusion criteria, a simple increase in the percentage of patients in whom we prescribe aspirin may be confounded by each patient's individual characteristics, the contribution of multiple socioeconomic determinants and a whole host of other effects.
Level four "results" are even harder to nail down. What is the result that counts for the patient? You might include the proxy outcome of "prescribed aspirin" at this level, but isn't the ultimate result we want an improved outcome for the patient? Imagine, then, the list of potential confounders and biases that could creep into any educational research study that looks at the outcome of patients as a result of an educational intervention - even if that intervention is at the level of the currently-practicing physician...never mind if the learner is a medical student.
These are difficult, but not impossible, issues to address in medical education, and the first step requires only that we attempt to move beyond level 2. Introducing some measure of process change after educating students in evidence-based medicine is what's needed. Even if the process measure is simulated - as in these evaluations using simulated clinical examinations - it is a start.
A great review of expertise and its relationship to evidence.
There is a lot about math that gets in the way of evidence-based medicine. Arguably the most useful methods for expressing the results of clinical research data - the Number Needed to Treat and the Likelihood Ratio - are somehow seen as arcane and complex mathematical problems by many medical learners. I'm told (usually by my wife, but also by others) that I have lost perspective from having taught this material for so long, but I think it is a bigger problem. In his article "Physician numeracy as the basis for an evidence-based medicine curriculum," Rao also notes this problem - some medical learners are simply not very good with numbers. And it's not just the weirder aspects of probability that confound learners...when we're calculating NNT and LR, we're only talking about algebra. But more than the calculations, I don't think we spend enough time talking about what we are using the numbers for. There's such an emphasis on just knowing the formulas for the test that we don't spend time on the "proofs" - the arguments for why these numbers are used. The students don't (or can't) spend time trying to understand them, and the faculty eventually give up trying to teach them.
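To show just how little math is actually involved, here is a minimal sketch of the two calculations in question. The numbers are purely illustrative (not drawn from any particular study):

```python
def nnt(cer, eer):
    """Number Needed to Treat = 1 / absolute risk reduction (ARR),
    where ARR = control event rate (CER) - experimental event rate (EER)."""
    return 1 / (cer - eer)

def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio: LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

# Illustrative numbers only: a 20% event rate untreated vs. 15% treated,
# and a test with 90% sensitivity and 80% specificity.
print(nnt(0.20, 0.15))          # ARR = 0.05, so NNT is about 20
print(positive_lr(0.90, 0.80))  # LR+ is about 4.5
```

That's it - two divisions and a subtraction. The "proof" worth teaching is why these particular ratios answer a clinical question, not how to compute them.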
However, I have also encountered examples of the opposite problem. One of the more frequent arguments I get as a course director is about the rounding of these numbers - usually as a result of an incorrect answer on an assessment. Math majors can circumvent some of the sensitivity/specificity/predictive value calculation problems I give them using estimation techniques that were certainly not part of my math education. I once had a resident publicly evince his frustration with the Bayesian approach to diagnostic testing, which uses likelihood ratios and applies them to estimates of pre-test probability. We were discussing methods of determining a patient's pre-test probability, ranging from population prevalence to probability developed by a clinical decision rule to "clinical impression" (categorized as likely (80-90% pre-test probability), unlikely (10-20% pre-test probability) and intermediate (50% pre-test probability)). The idea that we would calculate a likelihood ratio to even one significant figure and then apply it to a "clinical impression" guesstimate using a completely different order of magnitude of pre-test probability was too much for this computer-science-and-mathematics-trained family medicine resident.
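The resident's complaint can be made concrete. The Bayesian mechanics being taught are: convert the pre-test probability to odds, multiply by the likelihood ratio, convert back to a probability. A sketch with made-up numbers (the LR of 4.5 and the category midpoints are my choices for illustration):

```python
def posttest_probability(pretest_prob, lr):
    """Apply a likelihood ratio to a pre-test probability via odds."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Midpoints of the "clinical impression" categories discussed above,
# pushed through an illustrative LR+ of 4.5:
for label, p in [("unlikely", 0.15), ("intermediate", 0.50), ("likely", 0.85)]:
    print(label, round(posttest_probability(p, 4.5), 2))
# unlikely 0.44, intermediate 0.82, likely 0.96
```

The resident's point, in these terms: carrying the LR to a significant figure is meaningless when the pre-test input is a guess that could be off by tens of percentage points.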
I believe there is a middle ground here, but as with most middle grounds, it's not well defined and can feel a little too much like compromise. To use evidence-based medicine best, we require a deeper understanding of the meaning and derivation of the numbers, combined with some common sense about their application. We need a comfort with numbers (both algebraic and statistical) so that we can speak the language of the science that is an essential part of our discipline. We should recognize when the details of the numbers "don't add up" and the circumstances in which those details are important. We should also recognize when the numbers are being used simply for illustration, to aid in decision making. Math is a primary tool in evidence-based medicine, but the goal is to make better decisions for our patients. Sometimes the details distract us from this goal.
These have been recommended to me recently as good videos about disparities and their impact on development and the imperative in clinical and public healthcare to address them.
A few thoughts about this video:
A few thoughts about this summary video:
There's more to life than your genes...Genetics education has really exploded in medical education. Despite all its importance and the real promise it holds for us to tailor therapies and understand risk better, I'm more excited about the ideas behind epigenetics. The idea that our nurture (social history, development, family) not only influences but may fundamentally alter our nature (the expression of our genetic code) reinforces the essential role of intelligent, capable and caring physicians who know well their patients, their patients' families and their patients' communities. That, dear readers, is a major way that Family Medicine improves the Public Health.
A colleague recently sent me the following headline from Yahoo! Health:
http://health.yahoo.net/news/s/hsn/your-gut-bacteria-may-predict-your-obesity-risk (do me a favor and wait on clicking that link until you've read the rest of this post...)
Now, he was just sharing information - he's interested in this field - but I'm teaching evidence-based medicine to the medical students this month, and my skepticism levels are turned to 11 (from their usual level of 9...it's probably not safe to have these levels pegged all the time...).
While reading over an analysis of decades-old studies of LSD as a treatment for alcoholism last week, I found that the so-called number needed to treat was 6 to prevent alcohol misuse. In other words, treat six people and one would benefit.
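For anyone who wants the arithmetic behind "treat six people and one would benefit": NNT is the reciprocal of the absolute risk reduction, so an NNT of 6 implies roughly a 17-percentage-point difference in event rates between the groups.

```python
# NNT = 1 / ARR, so an NNT of 6 corresponds to an absolute risk
# reduction of 1/6, i.e. about 16.7 percentage points.
nnt = 6
arr = 1 / nnt
print(f"ARR implied by NNT={nnt}: {arr:.1%}")  # about 16.7%
```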
Golly - it's only taken...what...20 years to make headlines?