
Author: Brian S McGowan, PhD

ABSTRACT: Constructing an adaptive care model for the management of disease-related symptoms throughout the course of multiple sclerosis–performance improvement CME.

BACKGROUND:
Symptom management remains a challenging clinical aspect of multiple sclerosis (MS).
OBJECTIVE:
To design a performance improvement continuing medical education (PI CME) activity for better clinical management of MS-related depression, fatigue, mobility impairment/falls, and spasticity.
METHODS:
Ten volunteer MS centers participated in a three-stage PI CME model: A) baseline assessment; B) practice improvement CME intervention; C) reassessment. Expert faculty developed performance measures and activity intervention tools. Designated MS center champions reviewed patient charts and entered data into an online database. Stage C data were collected eight weeks after implementation of the intervention and compared with Stage A baseline data to measure change in performance.
RESULTS:
Aggregate data from the 10 participating MS centers (405 patient charts) revealed performance improvements in the assessment of all four MS-related symptoms. Statistically significant improvements were found in the documented assessment of mobility impairment/falls (p=0.003) and spasticity (p<0.001). For documentation of care plans, statistically significant improvements were reported for fatigue (p=0.007) and mobility impairment/falls (p=0.040); non-significant changes were noted for depression and spasticity.
CONCLUSIONS:
Our PI CME interventions demonstrated performance improvement in the management of MS-related symptoms. This PI CME model (available at www.achlpicme.org/ms/toolkit) offers a new perspective on enhancing symptom management in patients with MS.

via Constructing an adaptive care model for the manag… [Mult Scler. 2013] – PubMed – NCBI.
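
One methodological note on the abstract above: it reports p-values for the change in documentation rates between Stage A and Stage C, but does not name the statistical test used. As a purely hypothetical sketch (invented counts, assumed chi-square comparison of documented vs. undocumented charts), the analysis might look something like this:

```python
# Hypothetical sketch of a baseline-vs-reassessment comparison of
# documentation rates. The counts below are invented for illustration;
# the published study reports neither them nor the test it used.
from scipy.stats import chi2_contingency

# Rows: Stage A (baseline), Stage C (reassessment)
# Columns: charts with documented assessment, charts without
table = [[120, 85],
         [165, 35]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4g}")
```

A p-value below 0.05 in a comparison like this would correspond to the kind of statistically significant improvement the abstract reports for mobility impairment/falls and spasticity.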

A Roadmap for Grantsmanship in Medical Education: Applying the SACRED Principles

If there is one constant challenge in the way we innovate, evolve, and assess continuing medical education, it might be the simple reality that these efforts take resources. In response, medical educators often spend upwards of half their time trying to find the resources that will allow them to develop, pilot, and produce the educational interventions they envision. This leads the community down two paths: 1) make do with the resources that are available, or 2) commit the time to ‘perfect’ one’s proposal writing and grantsmanship such that funding can be secured and innovation can be pursued.

On the other hand, there are countless organizations tasked with the alternative challenge: they are committed to funding medical education and research that will drive change in healthcare, but they struggle to effectively tie their funding decisions to a rational assessment of which proposals are most likely to be successful. As a result, there may be little fidelity in their funding allocation, and over time this inefficiency leads to a depletion of funding and an exacerbation of the problems described in the first paragraph.

So how might we overcome these challenges, such that educational providers can more effectively conceive and construct grant proposals that tell the proper story, and funding organizations can review, analyze, and fund the proposals they are most confident will work?

For nearly a decade I have been exploring and promoting the SACRED model for grantsmanship, which identifies and simplifies the critical information that needs to be shared. In this post I would like to briefly introduce the model more broadly. Since I originally conceived of the model in ~2004, I have been constantly looking for evidence of its success and/or shortcomings, and as I write this post I can wholeheartedly support its efficacy…and I can now provide countless examples of how the model has been employed successfully to increase a provider’s funding rates and to simplify a funder’s review and improve the fidelity of their decisions.

Moreover, in reviewing and funding thousands of educational grant proposals myself, and in successfully writing dozens and dozens of educational grant proposals over the past decade, I have come to learn that while the SACRED model serves as a valuable roadmap, success ultimately lies in how the roadmap is employed. I will try to write about this in subsequent posts. For now, let me simply introduce the elements of the roadmap.

The critical elements of grantsmanship in medical education are as follows:

  1. Scientific Rationale: is there expertise and sophistication in the linkages between the identified clinical gaps and educational needs?
  2. Audience: more specifically, how have you explored and defined the needs and preferences of the intended audience of learners? (Do you have unique access to, or experience with, these learners?)
  3. Compliance: models and track record (self-explanatory for those in continuing medical education).
  4. Results: alignment of assessment methodology and one’s organizational track record.
  5. Educational Design: tell a concise and evidence-based story about the intended interventional models and one’s organizational track record.
  6. Differentiation: provide a more detailed explanation of how the broader plan will be implemented and what makes your approach unique and valuable (funding justification is critical here).

Early on it was pointed out to me that the SACRED model could just as easily be renamed the SCARED model, and the irony of this is not lost on many. But since the approach is intended to make the planning and funding of innovative and effective medical education easier and less worrisome, I believe the SACRED moniker fits.

Having introduced the six key elements of the model, I hope that you will begin to explore your internal grantsmanship models: do your proposals highlight and emphasize these elements? Or do your review practices identify and deconstruct these elements? I believe that this initial self-assessment alone will provide the community with a valuable exercise.

FWIW, I look forward to hearing what you think about the model, and I certainly look forward to hearing how the self-assessment goes! Feel free to contact me if you are immediately interested in a next-level exploration or if you want to understand how to ‘move the needle’ with your internal processes.

At a minimum…I hope this helps all of us in some small way!

Brian

MANUSCRIPT: A feeling of flow: exploring junior scientists’ experiences with dictation of scientific articles

Background
Science involves publishing results, but many scientists do not master this. We introduced dictation as a method of producing a manuscript draft, participation in writing teams, and attendance at a writing retreat to junior scientists in our department. This study aimed to explore the scientists’ experiences with this process.

Methods
Four focus group interviews were conducted, comprising all participating scientists (n = 14). Each interview was transcribed verbatim and coded independently by two interviewers. The coding structure was discussed until consensus was reached, and from this the emergent themes were identified.

Results
Participants were 7 PhD students, 5 scholarship students, and 2 clinical research nurses. Three main themes were identified: ‘Preparing and then letting go’ indicated that dictating worked best when properly prepared for. ‘The big dictation machine’ described the benefits of writing teams, in which junior scientists received feedback on both the content and structure of their papers. ‘Barriers to and drivers for participation’ described flow-like states that participants experienced during dictation.

Conclusions
Motivation and a high level of preparation were pivotal to being able to dictate a full article in one day. The descriptions of flow-like states seem analogous to the theoretical model of flow, which is interesting, as flow is usually deemed a state reserved for skilled experts. Our findings suggest that other academic groups might benefit from using this approach, including dictation of manuscripts, to encourage participants’ confidence in their writing skills.

via BMC Medical Education | Abstract | A feeling of flow: exploring junior scientists experiences with dictation of scientific articles.

MANUSCRIPT: Natural language processing: algorithms and tools to extract computable information from EHRs and from the biomedical literature

The increasing adoption of electronic health records (EHRs) and the corresponding interest in using these data for quality improvement and research have made it clear that the interpretation of narrative text contained in the records is a critical step. The biomedical literature is another important information source that can benefit from approaches requiring structuring of data contained in narrative text. For the first time, we dedicate an entire issue of JAMIA to biomedical natural language processing (NLP), a topic that has been among the most cited in this journal for the past few years. We start with a description of a contest to select the best-performing algorithms for detection of temporal relationships in clinical documents (see page 806), followed by a general review of the significance of this task and a brief description of commonly used methods to address it (see page 814).

via Natural language processing: algorithms and tools to extract computable information from EHRs and from the biomedical literature — Ohno-Machado et al. 20 (5): 805 — Journal of the American Medical Informatics Association.
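
To make “extracting computable information from narrative text” concrete, here is a deliberately toy sketch that pulls simple temporal expressions out of an invented clinical sentence with regular expressions. It bears no relation to the contest systems described in the issue, which use far more sophisticated methods; everything in it is made up for illustration.

```python
# Toy illustration of structuring narrative clinical text: extract
# simple temporal expressions with regular expressions. Real clinical
# NLP systems (like those in the JAMIA issue) are far more sophisticated.
import re

note = "Patient reports chest pain for 3 days; last ECG on 2013-08-14."

# Durations like "3 days"; captures the number and the unit.
durations = re.findall(r"\b(\d+)\s+(day|week|month|year)s?\b", note)
# ISO-style dates like "2013-08-14".
dates = re.findall(r"\b\d{4}-\d{2}-\d{2}\b", note)

print(durations)  # [('3', 'day')]
print(dates)      # ['2013-08-14']
```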

MANUSCRIPT: Asynchronous vs didactic education: it’s too early to throw in the towel on tradition

Background
Asynchronous, computer-based instruction is cost-effective, allows self-directed pacing and review, and addresses the preferences of millennial learners. Current research suggests there is no significant difference in learning compared with traditional classroom instruction. Data are limited for novice learners in emergency medicine. The objective of this study was to compare asynchronous, computer-based instruction with traditional didactics for senior medical students during a week-long intensive course in acute care. We hypothesized that the two modalities would be equivalent.

Methods
This was a prospective observational quasi-experimental study of 4th-year medical students who were novice learners with minimal prior exposure to curricular elements. We assessed baseline knowledge with an objective pre-test. The curriculum was delivered in either traditional lecture format (shock, acute abdomen, dyspnea, field trauma) or via asynchronous, computer-based modules (chest pain, EKG interpretation, pain management, trauma). An interactive review covering all topics was followed by a post-test. Knowledge retention was measured after 10 weeks. Pre- and post-test items were written by a panel of medical educators and validated with a reference group of learners. Mean scores were analyzed using a dependent t-test, and attitudes were assessed with a 5-point Likert scale.

Results
Forty-four of 48 students completed the protocol. Students initially acquired more knowledge from didactic education, as demonstrated by mean gain scores (didactic: 28.39% ± 18.06; asynchronous: 9.93% ± 23.22). The mean difference between didactic and asynchronous was 18.45%, 95% CI [10.40 to 26.50]; p = 0.0001. Retention testing demonstrated similar knowledge attrition: mean gain scores were -14.94% (didactic) and -17.61% (asynchronous), a non-significant difference of 2.68% ± 20.85, 95% CI [-3.66 to 9.02], p = 0.399. The attitudinal survey revealed that 60.4% of students believed the asynchronous modules were educational and 95.8% enjoyed the flexibility of the method. Overall, 39.6% of students preferred asynchronous education for required didactics, 37.5% were neutral, and 23% preferred traditional lectures.

Conclusions
Asynchronous, computer-based instruction was not equivalent to traditional didactics for novice learners of acute care topics. Interactive, standard didactic education was valuable. Retention rates were similar between instructional methods. Students had mixed attitudes toward asynchronous learning but enjoyed its flexibility. We urge caution in trading traditional didactic lectures for asynchronous education for novice learners in acute care.

via BMC Medical Education | Abstract | Asynchronous vs didactic education: it’s too early to throw in the towel on tradition.
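
The Methods above name the analysis explicitly: mean gain scores compared with a dependent (paired) t-test. For readers who want to see the mechanics, here is a minimal sketch; the scores are invented, and only the choice of test comes from the paper.

```python
# Minimal sketch of the dependent (paired) t-test named in the Methods.
# The scores below are invented; each pre/post pair belongs to the same
# student, which is what makes the paired test appropriate.
from scipy.stats import ttest_rel

pre  = [52.0, 61.5, 48.0, 70.0, 55.5, 63.0]   # hypothetical pre-test scores (%)
post = [78.5, 82.0, 75.0, 88.0, 80.5, 85.0]   # hypothetical post-test scores (%)

t_stat, p_value = ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The paired design matters here: because each student serves as their own control, between-student variability drops out of the comparison.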

ABSTRACT: Effective leadership – The way to excellence in health professions education

The current times are witnessing an explosion of new knowledge in medicine. The demographic profile and geographic distribution of many diseases are changing; there have been dramatic shifts in health care delivery; healthcare professionals are more socially and professionally accountable; and patients have become more consumerist in their attitudes. These factors, coupled with the increasing demand for trained health care professionals, have led to, first, a rapid increase in the number of health professions education institutions and, second, curricular changes and the adoption of newer teaching and learning methodologies to equip graduates with the desired outcomes. The scene in health professions education is one characterized by rapid activity and change. It is a time that demands effective leadership at these institutions to achieve excellence. Drawing on a decade of experience at different medical schools in the Gulf region, the author argues that effective leadership, as observed at the institutions where he worked, is responsible for the realization of institutional vision, rapid development, and the achievement of excellence.

via Effective leadership – The way to excellence in he… [Med Teach. 2013] – PubMed – NCBI.

RESOURCE: Bullet points don’t work

At last, we have some scientifically rigorous evidence to show that slides full of bullet points don’t work.

The research is the work of Chris Atherton, a cognitive psychologist. Chris recently delivered a presentation at the Technical Communication UK Conference and has put her slides up on SlideShare. There has been a tremendous amount of interest in them, but as they were designed to complement Chris’s talk, they only tell half the story.

In this post I’ll explain the findings of Chris’s research. I’ve written the post based on Chris’s slides and asked Chris to comment on various aspects. Chris has also reviewed this post to make sure I’ve got all the science right.

via Bullet points don’t work.

ABSTRACT: Limitations of poster presentations reporting educational innovations at a major international medical education conference

Background: In most areas of medical research, the label of ‘quality’ is associated with well-accepted standards. Whilst its interpretation in the field of medical education is contentious, there is agreement on the key elements required when reporting novel teaching strategies. We set out to assess whether these features had been fulfilled by poster presentations at a major international medical education conference.

Methods: Such posters were analysed in four key areas: reporting of theoretical underpinning, explanation of instructional design methods, descriptions of the resources needed for introduction, and the offering of materials to support dissemination.

Results: Three hundred and twelve posters were reviewed, with 170 suitable for analysis. Forty-one percent described their methods of instruction or innovation design. Thirty-three percent gave details of equipment, and 29% of studies described resources that may be required for delivering such an intervention. Further resources to support dissemination of their innovation were offered by 36%. Twenty-three percent described the theoretical underpinning or conceptual frameworks upon which their work was based.

Conclusions: These findings suggest that posters presenting educational innovation are currently limited in what they offer to educators. Presenters should seek to enhance their reporting of these crucial aspects by employing existing published guidance, and organising committees may wish to consider explicitly requesting such information at the time of initial submission.

via Limitations of poster presentations reporting educational innovations at a major international medical education conference | Gordon | Medical Education Online.

ABSTRACT: Pediatric collaborative improvement networks: background and overview.

Multiple gaps exist in health care quality and outcomes for children, who receive <50% of recommended care. The American Board of Pediatrics has worked to develop an improvement network model for pediatric subspecialties as the optimal means to improve child health outcomes and to allow subspecialists to meet the performance in practice component of Maintenance of Certification requirements. By using successful subspecialty initiatives as exemplars, and features of the Institute for Healthcare Improvement’s Breakthrough Series model, currently 9 of 14 pediatric subspecialties have implemented collaborative network improvement efforts. Key components include a common aim to improve care; national multicenter prospective collaborative improvement efforts; reducing unnecessary variation by identifying, adopting, and testing best practices; use of shared, valid, high-quality real-time data; infrastructure support to apply improvement science; and public sharing of outcomes. As a key distinguisher from time-limited collaboratives, ongoing pediatric collaborative improvement networks begin with a plan to persist until aims are achieved and improvement is sustained. Additional evidence from within and external to health care has accrued to support the model since its proposal in 2002, including the Institute of Medicine’s vision for a Learning Healthcare System. Required network infrastructure systems and capabilities have been delineated and can be used to accelerate the spread of the model. Pediatric collaborative improvement networks can serve to close the quality gap, engage patients and caregivers in shared learning, and act as laboratories for accelerated translation of research into practice and new knowledge discovery, resulting in improved care and outcomes for children.

via Pediatric collaborative improvement networks: bac… [Pediatrics. 2013] – PubMed – NCBI.

ABSTRACT: Collaborative networks for both improvement and research

Moving significant therapeutic discoveries beyond early biomedical translation or T1 science and into practice involves: (1) T2 science, identifying “the right treatment for the right patient in the right way at the right time” (eg, patient-centered outcomes research) and tools to implement this knowledge (eg, guidelines, registries); and (2) T3 studies addressing how to achieve health care delivery change. Collaborative improvement networks can serve as large-scale, health system laboratories to engage clinicians, researchers, patients, and parents in testing approaches to translate research into practice. Improvement networks are of particular importance for pediatric T2 and T3 research, as evidence to establish safety and efficacy of therapeutic interventions in children is often lacking. Networks for improvement and research are also consistent with the Institute of Medicine’s Learning Healthcare Systems model in which learning networks provide a system for improving care and outcomes and generate new knowledge in near real-time. Creation of total population registries in collaborative network sites provides large, representative study samples with high-quality data that can be used to generate evidence and to inform clinical decision-making. Networks use collaboration, data, and quality-improvement methods to standardize practice. Therefore, variation in outcomes due to unreliable and unnecessary care delivery is reduced, increasing statistical power, and allowing a consistent baseline from which to test new strategies. In addition, collaborative networks for improvement and research offer the opportunity to not only make improvements but also to study improvements to determine which interventions and combination of strategies work best in what settings.

via Collaborative networks for both improvement and r… [Pediatrics. 2013] – PubMed – NCBI.