
Author: Brian S McGowan, PhD

MANUSCRIPT: The Impact of Social Media on Dissemination and Implementation of Clinical Practice Guidelines: A Longitudinal Observational Study.

BACKGROUND:
Evidence-based clinical practice guidelines (CPGs) are statements that provide recommendations to optimize patient care for a specific clinical problem or question. Merely reading a guideline rarely leads to implementation of recommendations. The American Academy of Neurology (AAN) has a formal process of guideline development and dissemination. The last few years have seen a burgeoning of social media such as Facebook, Twitter, and LinkedIn, and newer methods of dissemination such as podcasts and webinars. The role of these media in guideline dissemination has not been studied. Systematic evaluation of dissemination methods and comparison of the effectiveness of newer methods with traditional methods is not available. It is also not known whether specific dissemination methods may be more effectively targeted to specific audiences.
OBJECTIVE:
Our aim was to (1) develop an innovative dissemination strategy by adding social media-based dissemination methods to traditional methods for the AAN clinical practice guidelines “Complementary and alternative medicine in multiple sclerosis” (“CAM in MS”) and (2) evaluate whether the addition of social media outreach improves awareness of the CPG and knowledge of CPG recommendations, and affects implementation of those recommendations.
METHODS:
Outcomes were measured by four surveys in each of the two target populations: patients and physicians/clinicians (“physicians”). The primary outcome was the difference in participants’ intent to discuss use of complementary and alternative medicine (CAM) with their physicians or patients, respectively, after novel dissemination, as compared with that after traditional dissemination. Secondary outcomes were changes in awareness of the CPG, knowledge of CPG content, and behavior regarding CAM use in multiple sclerosis (MS).
RESULTS:
Response rates were 25.08% (622/2480) for physicians and 43.5% (348/800) for patients. Awareness of the CPG increased after traditional dissemination (absolute difference, 95% confidence interval: physicians 36%, 95% CI 25-46, and patients 10%, 95% CI 1-11) but did not increase further after novel dissemination (physicians 0%, 95% CI -11 to 11, and patients -4%, 95% CI -6 to 14). Intent to discuss CAM also increased after traditional dissemination but did not change after novel dissemination (traditional: physicians 12%, 95% CI 2-22, and patients 19%, 95% CI 3-33; novel: physicians 11%, 95% CI -1 to 21, and patients -8%, 95% CI -22 to 8). Knowledge of CPG recommendations and behavior regarding CAM use in MS did not change after either traditional dissemination or novel dissemination.
CONCLUSIONS:
Social media-based dissemination methods did not confer additional benefit over print-, email-, and Internet-based methods in increasing CPG awareness and changing intent in physicians or patients. Research on audience selection, message formatting, and message delivery is required to utilize Web 2.0 technologies optimally for dissemination.

via The Impact of Social Media on Dissemination and Implementation of Clinical Practice Guidelines: A Longitudinal Observational Study. – PubMed – NCBI.

ABSTRACT: Unveiling the Mobile Learning Paradox.

A mobile learning paradox exists in Australian healthcare settings. Although it is increasingly acknowledged that timely, easy, and convenient access to health information using mobile learning technologies can enhance care and improve patient outcomes, currently there is an inability for nurses to access information at the point of care. Rapid growth in the use of mobile technology has created challenges for learning and teaching in the workplace. Easy access to educational resources via mobile devices challenges traditional strategies of knowledge and skill acquisition. Redesign of learning and teaching in the undergraduate curriculum and the development of policies to support the use of mobile learning at point of care is overdue. This study explored mobile learning opportunities used by clinical supervisors in tertiary and community-based facilities in two Australian States. Individual, organisation and systems level governance were sub-themes of professionalism that emerged as the main theme and impacts on learning and teaching in situ in healthcare environments. It is imperative healthcare work redesign includes learning and teaching that supports professional identity formation of students during work integrated learning.

via Unveiling the Mobile Learning Paradox. – PubMed – NCBI.

ABSTRACT: A mixed-methods study of research dissemination across practice-based research networks.

Practice-based research networks may be expanding beyond research into rapid learning systems. This mixed-methods study uses Agency for Healthcare Research and Quality registry data to identify networks currently engaged in dissemination of research findings and to select a sample to participate in qualitative semistructured interviews. An adapted Diffusion of Innovations framework was used to organize concepts by characteristics of networks, dissemination activities, and mechanisms for rapid learning. Six regional networks provided detailed information about dissemination strategies, organizational context, role of practice-based research network, member involvement, and practice incentives. Strategies compatible with current practices and learning innovations that generate observable improvements may increase effectiveness of rapid learning approaches.

via A mixed-methods study of research dissemination across practice-based research networks. – PubMed – NCBI.

ABSTRACT: Learning and Collective Knowledge Construction With Social Media: A Process-Oriented Perspective.

Social media are increasingly being used for educational purposes. The first part of this article briefly reviews literature that reports on educational applications of social media tools. The second part discusses theories that may provide a basis for analyzing the processes that are relevant for individual learning and collective knowledge construction. We argue that a systems-theoretical constructivist approach is appropriate to examine the processes of educational social media use, namely, self-organization, the internalization of information, the externalization of knowledge, and the interplay of externalization and internalization providing the basis of a co-evolution of cognitive and social systems. In the third part we present research findings that illustrate and support this systems-theoretical framework. Concluding, we discuss the implications for educational design and for future research on learning and collective knowledge construction with social media.

via Learning and Collective Knowledge Construction With Social Media: A Process-Oriented Perspective. – PubMed – NCBI.

Comparative Effectiveness of eLearning

We are thrilled to announce the results of a head-to-head trial, conducted in partnership with the Postgraduate Institute for Medicine (PIM) and AcademicCME, that measured changes in knowledge and competence. Designed in conjunction with a recently launched educational series for healthcare professionals managing patients with dyslipidemia, the results demonstrate that clinician learners participating in more interactive online education powered by the ArcheViewer achieve much greater improvements in knowledge and competence than those participating in traditional online learning.

The trial consisted of three 30-minute, video-based activities accredited for physicians, nurses, and pharmacists.* To support the comparative study design, the same video-based content that was delivered within the ArcheViewer (intervention group) was also hosted on a separate educational portal (control group) and delivered to a similar audience of clinician learners. Each of the control group activities included the same CE front matter, pre-test, video file, and post-test questions that were utilized in the intervention group.

To assess the impact of the educational experience, a single-sample analysis was conducted on the pre-test and first post-test performance of 711 learners. The percentage of correct responses for each group was compared to understand the change in correctness before and immediately after learners completed each activity. These data were then compared at a composite level across the three individual activities and at the level of each activity separately.
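As a rough illustration of this kind of pre/post comparison, the sketch below computes the percent-correct change for an intervention group and a control group and the ratio between them. The data, function names, and group sizes are hypothetical stand-ins, not the trial dataset or the analysis code actually used in the study.

```python
# Hypothetical sketch of a pre/post percent-correct comparison.
# All data below are toy values, not the actual trial results.

def pct_correct(responses):
    """Share of correct responses (1 = correct, 0 = incorrect), as a percentage."""
    return 100.0 * sum(responses) / len(responses)

def pre_post_change(pre, post):
    """Absolute change in percent-correct from pre-test to post-test."""
    return pct_correct(post) - pct_correct(pre)

# Toy response data: one entry per learner response.
intervention_pre  = [0, 0, 1, 0, 1, 0, 0, 1]
intervention_post = [1, 1, 1, 0, 1, 1, 1, 1]
control_pre       = [0, 1, 0, 0, 1, 0, 1, 0]
control_post      = [1, 1, 0, 0, 1, 1, 1, 0]

delta_int = pre_post_change(intervention_pre, intervention_post)
delta_ctl = pre_post_change(control_pre, control_post)
ratio = delta_int / delta_ctl  # the "X-times greater change" figure

print(f"intervention change: {delta_int:.1f} percentage points")
print(f"control change:      {delta_ctl:.1f} percentage points")
print(f"ratio:               {ratio:.1f}x")
```

The same per-group calculation can be repeated for each activity separately and then pooled across activities to produce a composite figure, which is the shape of the comparison described above.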


This comparative analysis revealed that clinician learners who participated in the ArcheViewer-powered education demonstrated changes in knowledge and competence that were nearly FOUR times greater than those of the control group, who participated in a traditional online series utilizing the same primary video content.

While the composite analysis demonstrated nearly four times greater change in learners participating in the ArcheViewer-powered activities, the effect at the level of each individual activity ranged from 2.3 to 5.7 times greater change between pre- and post-test correctness.

Stated another way, the ArcheViewer’s more interactive and engaging online experience appears to reduce the cognitive burden of learning placed on each learner and facilitates the movement of newly consumed information from short-term memory into long-term memory. The results of this comparative analysis suggest that our unique approach to structuring online learning may effectively overcome the key limitations of working memory.

We have been privileged to work closely with terrific educational partners, such as AcademicCME and PIM, who have now utilized the ArcheViewer to support lifelong learning for more than ten thousand clinicians, validating the strength and impact of the ArcheViewer and the ArcheMedX Learning Actions Model across hundreds of activities and dozens of clinical areas.


For additional data and a deeper analysis of the trial results, please download this complimentary white paper.

 

If you are interested in seeing how the ArcheViewer transforms online learning, please schedule a 30-minute web demonstration with the ArcheMedX team using the following link:

Schedule a Web Demo

 

*This online education series and comparative research project was supported by an independent educational grant from Sanofi US and Regeneron Pharmaceuticals Alliance.

 

Continue Reading

New Publication: “The Rise and Stall of eLearning: Best Practices for Technology-Supported Education”

The following editorial was recently published in J Contin Educ Nurs. 2015 Jul;46(7):292-4. Please contact me directly if you are interested in obtaining a PDF of the article.

The Rise and Stall of eLearning: Best Practices for Technology-Supported Education

“eLearning” is a commonly used term in education today, but what does it mean? This column explores issues that educators need to be aware of in planning how technology-based education is most effectively delivered.

A Common Understanding

It is believed that the origins of the terms “eLearning” and “online learning” date back to the 1980s. Over the decade that followed, the opportunities presented by this new ‘type’ of instruction were widely lauded, as noted in this example from 1998:

“…nearly 50% of higher education institutions currently engage in some type of online learning. Academic and professional organizations agree that using web-based learning environments can offer sound pedagogical benefits…. The web is revolutionizing some areas of study through increased opportunities for learning and alternative formats for information.” 1

However, even in recent years the study of these now ubiquitous types of instruction has been hampered by often conflicting understandings of what is actually being explored. For instance, the terms “eLearning”, “online learning” or “web-based learning”, and “distance learning” have been used widely and interchangeably by both educators and researchers. For our purposes, we will simplify the terminology in keeping with the analysis published by Moore, Dickson-Deane, and Galyen2:

  • Distance Learning = simply describes the ability for an educational intervention to overcome geographic constraints.
  • Online or Web-based Learning = principally describes education that is accessed through use of the internet with the opportunity for connectivity and enhanced flexibility in design. It is seen as a newer and improved form of distance learning.
  • eLearning (or e-Learning) = while the term “eLearning” originally focused on computer-based training (CBT)3, it is now understood to more broadly describe education available through any technology with enhanced opportunity for connectivity and flexibility in design while overcoming constraints of both time and space.

Importantly, as we simplify the terminology, this is perhaps our first critical insight – the general definitions of “distance learning”, “online learning”, and “eLearning”, while perhaps intuitive to some, fail to adequately describe the nature of the underlying instructional design and/or educational interventions. Instead, they provide a more general description of the experience or opportunity. It is therefore critical for educational planners to focus more on the specific design of their education and not rely on catch-all or umbrella terminology to describe the learning experience.

A Promise Unfulfilled

Given this initial insight, we might still accept that the basic promise of technology-supported education seems well validated – with technology, content could be accessed by more learners, in more convenient ways, largely independent of geography, and asynchronously (if so desired). It should be pointed out that the generally established benefits of technology-supported education fall short of proving impact on attitudes, learning, or behavior change. These cognitive and psychomotor benefits are largely dependent on the specific interventions being delivered through educational technology.

The simple truth is that technology-supported education can be used to distribute and deliver really good education OR really mundane education – using computers and the internet is not an educational silver bullet, but it could be a remarkable educational tool. For example:

Tens of thousands of enduring ‘webcasts’ are produced each year. These activities, typically composed of simple, video-based content, can significantly extend the reach of content but have not been shown to have a substantial impact on engagement, completion rates, or learning.4,5 Takeaway: The convenience and expanded reach of viewing videos online does not automatically equal better learning.

More recently, Massive Open Online Courses (MOOCs) have emerged as a means of providing nearly universal access to educational content. These courses are usually composed of a curriculum or connected series of brief enduring webcasts presented over time. While we continue to learn more about the model, it is generally acknowledged that MOOCs have largely failed to live up to their substantial hype – the dissemination hasn’t been as wide as intended, the audiences have remained more homogeneous than desired, and attrition rates continue to underwhelm.6 Takeaway: Learners still need more structure and more motivation to make the commitment to learn.

Even more recently, the “Flipped Classroom” has emerged as a blended model of online activities serving to prepare learners for a live event that can then focus more on application than knowledge transfer. While my colleagues and I have recently had tremendous success with this model utilizing a more data-centric approach, other faculty and instructors have struggled to derive benefits that match the additional level of planning and resources that are needed.7,8 Takeaway: How the pieces are blended and how data is leveraged is much more important than simply fronting a meeting with extra assignments and work.

The need to critically evaluate these emerging models in technology-supported education is not insignificant – over the past 10 years alone, webcasts have accounted for more than 68 million clinician participations in professional development.9 Clearly, educators could have great impact if we could optimize each approach and fully realize the potential of the technology.

Uncovering the Missing Link

The good news is that each of the examples above provides a critical lesson for educators. In short, the ultimate value of technology-supported education is not inherent in the technology per se, but in how that technology and the data it generates are leveraged. From the more traditional webcast we can learn that information can be broadly disseminated, but also that technology provides an opportunity to create engaging environments for active learning. These environments allow for new forms of learning data to be collected and explored so the learning experience can be increasingly dynamic, refined, and personalized. From MOOCs we can learn that activities can be tied together and that understanding how learners navigate through a connected curriculum can provide critical insights into what topics or lessons are most interesting and/or most challenging. From the flipped classroom we can learn how data-driven and agile educational design can enhance blended and sequential learning experiences, enabling educators to better focus their face-to-face interventions based on the data and insights gleaned from the online components.

Summary

The generalized terminology applied to technology-supported education is often vague or conflicting. We must dive more deeply into how technology and evidence are being used to drive learning. While technology may present an enhanced opportunity for connectivity and flexibility in design, this will not happen without planning and intention. To derive the ultimate benefits that have been posited for ‘eLearning’, the various technologies we choose must be leveraged intelligently to create opportunities to connect learners, to structure learning experiences, and to collect the critical forms of learning data that must become front and center in the educational planning process.

References

1 Blackboard Education Report. “Educational Benefits of Online Learning.” 1998. http://blackboardsupport.calpoly.edu/content/faculty/handouts/Ben_Online.pdf; Accessed May 1, 2015.

2 Moore JL, Dickson-Deane C, and Galyen K. (2011). “e-Learning, online learning, and distance learning environments: Are they the same?” The Internet and Higher Education 14(2):129–135.

3 Cross J. “The DNA of eLearning.” http://www.internettime.com/Learning/articles/DNA.pdf; Accessed May 1, 2015.

4 Williams JG. “Are online learning modules an effective way to deliver hand trauma management continuing medical education to emergency physicians?” Can J Plast Surg. 2014 Summer;22(2):75-8.

5 Mazzoleni MC, Maugeri C, Rognoni C, Cantoni A, and Imbriani M. “Is it worth investing in online continuous education for healthcare staff?” Stud Health Technol Inform. 2012;180:939-43.

6 EdTech Now. “MOOCs and other ed-tech bubbles.” http://edtechnow.net/2012/12/29/moocs-and-other-ed-tech-bubbles/; Accessed May 1, 2015.

7 McGowan BS, Balmer JT, and Chappell K. “Flipping the Classroom: A Data-Driven Model for Nursing Education.” J Contin Educ Nurs. 2014;45(11):477–478.

8 Rees J. “The flipped classroom is decadent and depraved.” 2014. https://moreorlessbunk.wordpress.com/2014/05/05/the-flipped-classroom-is-decadent-and-depraved/; Accessed May 5, 2015.

9 ACCME Annual Reports. http://www.accme.org/news-publications/publications/annual-report-data; Accessed May 15, 2015.

Defining (real) learner engagement in online educational interventions

Over the weekend I came across an interesting article in the latest Alliance Almanac, “Defining Participants and Learners in CME: Standardizing Language for Online Activity Reporting.” Access to this article is limited to Alliance members, but in a nutshell, here is the take-away:

Showing the overall number of people who engaged in our online CME activities, whether in whole or in part, and evaluated and/or requested credit for them is important, but it is only part of the story. Reporting the numbers of actual target audience members who engaged with the content and the clinical implications of their engagement is the remainder of the story. In the outcomes measurement reports, CME providers should include an explanation of the participation or engagement metrics and/or process used to gather them to make this apparent to grantors.

This is both a critical and necessary lesson – the number of ‘learners’ provides only a minimal understanding of impact – and to provide some practical support for educational planners, the author presents a table of data elements that might begin to provide additional understanding. Here is what is shared:

[Table: Lichti, “Defining Participants and Learners in CME”]

While this is a good start, it falls short of truly addressing what real learner engagement and impact look like and what novel engagement analytics can tell us. From our Learning Actions Research and the implementation of the Learning Actions Model, we can provide an example of a much deeper and (in my opinion) more valuable exploration of engagement.

When learners are encouraged to reflect and take action in increasingly efficient ways, we now empirically know that learners benefit from taking notes, setting reminders, conducting searches of related resources, viewing resources, downloading resources, sharing resources, asking faculty questions, responding to poll questions, reacting to assessment feedback, and beginning and replying to associated discussions – and the data show repeatedly that these actions are critical to improving learning and competency!

As one brief example, engagement in an online, flipped curriculum that utilizes the Learning Actions Model might be represented in the following way:

[Figure: What does engagement look like?]

Here is where a data-driven approach to learning provides a much richer understanding of learning AND engagement – from the figure above we learn that while participating in the 11-part curriculum, 121 learners took nearly 5,000 actions and averaged nearly 8 actions within each activity they began. 70% of these actions fall into what we refer to as the core learning actions: 1,747 resource views, 1,043 resource downloads, nearly 200 notes taken, and nearly 500 reminders set. These are the purest measures of engagement as they are unprompted and self-directed…and behind these numbers are the qualitative data (real learner notes, searches, discussions, etc.) that describe the actual cognitive actions that drive learning and knowledge change…and provide educators unique insight into the actual engagement and impact of their education!
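To make the metrics above concrete, here is a minimal sketch of how such engagement figures might be derived from a raw event log. The action names, the event records, and the “core action” categories are hypothetical stand-ins for illustration, not the ArcheViewer’s actual data model or analytics code.

```python
# Illustrative aggregation of learning-action events; all names and
# data below are hypothetical, not ArcheMedX data or APIs.
from collections import Counter

# Assumed "core" self-directed learning actions (illustrative labels).
CORE_ACTIONS = {"resource_view", "resource_download", "note", "reminder"}

# Toy event log: (learner_id, activity_id, action_type)
events = [
    ("u1", "a1", "resource_view"),
    ("u1", "a1", "note"),
    ("u1", "a2", "poll_response"),
    ("u2", "a1", "resource_download"),
    ("u2", "a2", "reminder"),
    ("u2", "a2", "resource_view"),
]

total_actions = len(events)
by_type = Counter(action for _, _, action in events)
core_share = 100.0 * sum(by_type[a] for a in CORE_ACTIONS) / total_actions

# Average actions per (learner, activity) pair with at least one action,
# i.e. "actions per activity begun".
pairs = {(learner, activity) for learner, activity, _ in events}
avg_per_activity = total_actions / len(pairs)

print(f"actions by type: {dict(by_type)}")
print(f"core-action share: {core_share:.0f}%")
print(f"avg actions per activity begun: {avg_per_activity:.1f}")
```

Scaled up to a real curriculum, the same grouping yields the kind of summary quoted above: total actions, per-activity averages, and the proportion attributable to unprompted, self-directed behavior.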

Keep these lessons in mind: as the author of the recent Almanac article suggests, understanding the impact of educational interventions based solely on audience reach and traditional assessments (pre/post tests) barely scratches the surface of what could and should be measured.

My take-away: With continuing advancements in online learning, we have so many more opportunities to leverage new forms of learning and engagement data than ever before. Yet, we must open our minds to exploring the self-directed actions learners take to support their own learning and retention – only then will we be able to peer into the cognitive underpinnings of learning and behavior change. In short, our community could learn much more about the impact of our education if we strive to provide learners with better tools for learning…and then watch and study how learners actually engage.

ABSTRACT: Variables that affect the process and outcome of feedback, relevant for medical training: a meta-review.

CONTEXT:
Feedback is considered important in medical education. The literature is not clear about the mechanisms that contribute to its effects, which are often small to moderate and at times contradictory. A variety of variables seem to influence the impact of feedback on learning. The aim of this study was to determine which variables influence the process and outcomes of feedback in settings relevant to medical education.
METHODS:
A myriad of studies on feedback have been conducted. To determine the most researched variables, we limited our review to meta-analyses and literature reviews published in the period from January 1986 to February 2012. According to our protocol, we first identified features of the feedback process that influence its effects and subsequently variables that influence these features. We used a chronological model of the feedback process to categorise all variables found.
RESULTS:
A systematic search of ERIC, PsycINFO and MEDLINE yielded 1101 publications, which we reduced to 203, rejecting papers on six exclusion criteria. Of these, 46 met the inclusion criteria. In our four-phase model, we identified 33 variables linked to task performance (e.g. task complexity, task nature) and feedback reception (e.g. self-esteem, goal-setting behaviour) by trainees, and to observation (e.g. focus, intensity) and feedback provision (e.g. form, content) by supervisors that influence the subsequent effects of the feedback process. Variables from all phases influence the feedback process and effects, but variables that influence the quality of the observation and rating of the performance dominate the literature. There is a paucity of studies addressing other, seemingly relevant variables.
CONCLUSIONS:
The larger picture of variables that influence the process and outcome of feedback, relevant for medical education, shows many open spaces. We suggest that targeted studies be carried out to expand our knowledge of these important aspects of feedback in medical education.

via Variables that affect the process and outcome of feedback, relevant for medical training: a meta-review. – PubMed – NCBI.

ABSTRACT: “Teaching is like nightshifts …”: a focus group study on the teaching motivations of clinicians.

BACKGROUND:
To ensure the highest quality of education, medical schools have to be aware of factors that influence the motivation of teachers to perform their educational tasks. Although several studies have investigated motivations for teaching among community-based practitioners, there is little data available for hospital-based physicians.
PURPOSES:
This study aimed to identify factors influencing hospital-based physicians’ motivations to teach.
METHODS:
We conducted 3 focus group discussions with 15 clinical teachers from the Medical Faculty at Hamburg University. Using a qualitative inductive approach, we extracted motivation-related factors from the transcripts of the audio-recorded discussions.
RESULTS:
Three main multifaceted categories influencing the motivation of teachers were identified: the teachers themselves, the students, and the medical faculty as an organization. Participants showed individual sets of values and beliefs about their roles as teachers as well as personal notions of what comprises a “good” medical education. Their personal motives to teach comprised a range of factors from intrinsic, such as the joy of teaching itself, to more extrinsic motives, such as the perception of teaching as an occupational duty. Teachers were also influenced by the perceived values and beliefs of their students, as well as their perceived discipline and motivation. The curriculum organization and aspects of leadership, human resource development, and the evaluation system proved to be relevant factors as well, whereas extrinsic incentives had no reported impact.
CONCLUSIONS:
Individual values, beliefs, and personal motives constitute the mental framework upon which teachers perceive and assess motivational aspects for their teaching. The interaction between these personal dispositions and faculty-specific organizational structures can significantly impair or enhance the motivation of teachers and should therefore be accounted for in program and faculty development.

via “Teaching is like nightshifts …”: a focus group study on the teaching motivations of clinicians. – PubMed – NCBI.

ABSTRACT: Improving Learning Efficiency of Factual Knowledge in Medical Education

OBJECTIVE:
The purpose of this review is to synthesize recent literature relating to factual knowledge acquisition and retention and to explore its applications to medical education.
RESULTS:
Distributing, or spacing, practice is superior to massed practice (i.e. cramming). Testing, compared to re-study, produces better learning and knowledge retention, especially if tested as retrieval format (short answer) rather than recognition format (multiple choice). Feedback is important to solidify the testing effect.
CONCLUSIONS:
Learning basic factual knowledge is often overlooked and under-appreciated in medical education. Implications for applying these concepts to smartphones are discussed; smartphones are owned by the majority of medical trainees and can be used to deploy evidence-based educational methods to greatly enhance learning of factual knowledge.

via Improving Learning Efficiency of Factual Knowledge in Medical Education. – PubMed – NCBI.