Patient Engagement Measure
A Patient Engagement Measure is a person measure that quantifies a patient's performance of some measurable medical behavior.
- Context:
- It can range from being a Digital Patient Engagement Measure to being ...
- It can range from being a Clinical Treatment Patient Engagement Measure to being a Clinical Study Participant Engagement Measure to being ...
- Example(s) (several of the usage-based measures below are illustrated in the code sketch after this outline):
- Patient-Performed Clinical Task Adherence.
- Duration of time spent on intervention (Beevers C et al. J Consult Clin Psychol. 2017;85:367-380).
- Percentage randomized who finished the course (Andrews G et al. J Anxiety Disord. 2018;55:70-78).
- Number of completed activities (Donkin L et al. J Med Internet Res. 2011;13(3):e52).
- Percentage treatment completion (Donkin L et al. J Med Internet Res. 2011;13(3):e52).
- Extent to which individuals experience the content of the Internet intervention (Christensen H et al. J Med Internet Res. 2009;11(2):e13).
- VA Metrics for Digital Engagement, such as:
- I have all the information I need to manage my health and health care.
- I am confident in working with my VA health care team to manage my health and health care.
- I feel in control of my health and health care (such as taking part in decisions or following through on any medication, treatment, or health routine).
- I am able to achieve my long-term health and health care goals (such as being self-reliant, living longer and better, or knowing that my family and friends can depend on me).
- …
- See: ....
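The following is a minimal sketch, in Python, of how several of the usage-based example measures above (duration of time spent on intervention, number of completed activities, and percentage of randomized participants who finished the course) might be computed. The per-participant log structure, field names, and sample values are assumptions for illustration, not any specific system's data model.

```python
# Minimal sketch (hypothetical data model): computing several of the
# usage-based example measures above from per-participant usage logs.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ParticipantLog:
    participant_id: str
    session_minutes: List[float] = field(default_factory=list)  # time per session
    completed_activities: int = 0   # e.g., modules or tasks finished
    randomized: bool = True         # included in the randomized sample


def duration_on_intervention(log: ParticipantLog) -> float:
    """Duration of time spent on the intervention (total minutes)."""
    return sum(log.session_minutes)


def completion_rate(logs: List[ParticipantLog], required_activities: int) -> float:
    """Percentage of randomized participants who finished the course,
    i.e., completed the intended number of activities."""
    randomized = [l for l in logs if l.randomized]
    if not randomized:
        return 0.0
    finished = sum(1 for l in randomized if l.completed_activities >= required_activities)
    return 100.0 * finished / len(randomized)


# Hypothetical example: 2 of 3 randomized participants completed all 8 modules.
logs = [
    ParticipantLog("p1", [30.0, 25.0], completed_activities=8),
    ParticipantLog("p2", [10.0], completed_activities=3),
    ParticipantLog("p3", [40.0, 35.0, 20.0], completed_activities=8),
]
print(duration_on_intervention(logs[0]))              # 55.0 minutes
print(completion_rate(logs, required_activities=8))   # 66.66...%
```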
References
2020a
- (Chien et al., 2020) ⇒ Isabel Chien, Angel Enrique, Jorge Palacios, Tim Regan, Dessie Keegan, David Carter, Sebastian Tschiatschek, Aditya Nori, Anja Thieme, Derek Richards, Gavin Doherty, and Danielle Belgrave. (2020). “A Machine Learning Approach to Understanding Patterns of Engagement with Internet-delivered Mental Health Interventions.” In: JAMA Network Open, 3(7).
2020b
- (Ingersgaard et al., 2020) ⇒ Ingersgaard MV, Helms Andersen T, Norgaard O, Grabowski D, and Olesen K. (2020). “Reasons for Nonadherence to Statins - A Systematic Review of Reviews.” In: Patient Preference and Adherence, 14:675-691.
2019
- (Baumel et al., 2019) ⇒ Baumel A, Muench F, Edan S, and Kane JM. (2019). “Objective User Engagement with Mental Health Apps: Systematic Search and Panel-based Usage Analysis.” In: Journal of Medical Internet Research, 21(9):e14567.
2018a
- (Short et al., 2018) ⇒ Camille E. Short, Ann DeSmet, Catherine Woods, Susan L. Williams, Carol Maher, Anouk Middelweerd, Andre Matthias Müller et al. (2018). “Measuring Engagement in EHealth and MHealth Behavior Change Interventions: Viewpoint of Methodologies.” In: Journal of Medical Internet Research, 20(11).
- QUOTE: ... However, to test the hypotheses generated by the conceptual modules, we need to know how to measure engagement in a valid and reliable way. The aim of this viewpoint is to provide an overview of engagement measurement options that can be employed in eHealth and mHealth behavior change intervention evaluations, discuss methodological considerations, and provide direction for future research. To identify measures, we used snowball sampling, starting from systematic reviews of engagement research as well as those utilized in studies known to the authors. A wide range of methods to measure engagement were identified, including qualitative measures, self-report questionnaires, ecological momentary assessments, system usage data, sensor data, social media data, and psychophysiological measures. ...
2018b
- (Simon & Tagliabue, 2018) ⇒ Carsta Simon, and Marco Tagliabue. (2018). “Feeding the Behavioral Revolution: Contributions of Behavior Analysis to Nudging and Vice Versa.” Journal of Behavioral Economics for Policy 2, no. 1
2017
- (Perski et al., 2017) ⇒ Olga Perski, Ann Blandford, Robert West, and Susan Michie. (2017). “Conceptualising Engagement with Digital Behaviour Change Interventions: A Systematic Review Using Principles from Critical Interpretive Synthesis.” In: Translational Behavioral Medicine, 7(2).
- QUOTE: ... “Engagement” with digital behaviour change interventions (DBCIs) is considered important for their effectiveness. Evaluating engagement is therefore a priority; however, a shared understanding of how to usefully conceptualise engagement is lacking. ...
... Engagement has traditionally been conceptualised differently across the behavioural science, computer science and HCI literatures, which might be due to the different epistemologies subscribed to, the differing research contexts and the different objectives pursued. In the computer science and HCI literatures, engagement has traditionally been conceptualised as the subjective experience of flow, a mental state characterised by focused attention and enjoyment [18]. This kind of conceptualisation might have emerged as a result of the focus on entertainment and usability of interactive technology. In the behavioural science literature, engagement has typically been conceptualised as “usage” of DBCIs, focusing on the temporal patterns (e.g. frequency, duration) and depth (e.g. use of specific intervention content) of usage [19, 20]. This kind of conceptualisation has emerged due to the observation that while many download and try DBCIs, sustained usage is typically low [21–24]. Henceforth, two working definitions of engagement as used in the computer science and HCI literatures (“engagement as flow”) and the behavioural science literature (“engagement as usage”) are used to scope the space within which this review is conducted. ...
... The following two synthetic constructs were developed: “engagement as subjective experience” and “engagement as behaviour”. ...
... The majority of articles reviewed from the behavioural science literature conceptualised engagement in behavioural terms, suggesting that it is identical to the usage of a DBCI or its components. Engagement has further been described as the extent of usage over time [19, 52], sometimes referred to as the “dose” obtained by participants or “adherence” to an intervention [25, 53, 54], determined by assessing the following subdimensions: “amount” or “breadth” (i.e. the total length of each intervention contact), “duration” (i.e. the period of time over which participants are exposed to an intervention), “frequency” (i.e. how often contact is made with the intervention over a specified period of time) and “depth” (i.e. variety of content used) [20, 53]. In the computer science and HCI literatures, engagement has been conceptualised as the degree of involvement over a longer period of time [55], sometimes referred to as “stickiness” [56]. A distinction has also been made between “active” and “passive” engagement; while the former involves contributing to the intervention through posting in an online discussion forum, the latter involves reading what others have written without commenting, also known as “lurking” [57]. Engagement has also been conceptualised as a process of linked behaviours, suggesting that users move dynamically between stages of engagement, disengagement and re-engagement [28]. As conceptual overlap was observed between these definitions, the authors propose that engagement involves different levels of usage over time. ...
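As an illustration of the usage subdimensions named in the quote above (amount/breadth, duration, frequency, and depth), the following is a minimal Python sketch over a hypothetical session-log schema. The field names, and the interpretation of “amount” as the average length of an intervention contact, are assumptions for illustration rather than part of the source.

```python
# Minimal sketch (hypothetical schema): deriving the usage subdimensions
# named in the quote -- amount, duration, frequency, and depth -- from
# per-participant session records.
from datetime import datetime
from typing import List, Set, Tuple

# Each session: (start time, minutes spent, set of content ids used).
Session = Tuple[datetime, float, Set[str]]


def usage_subdimensions(sessions: List[Session]) -> dict:
    if not sessions:
        return {"amount": 0.0, "duration_days": 0, "frequency": 0, "depth": 0}
    starts = [s[0] for s in sessions]
    return {
        # "amount"/"breadth": average length of each intervention contact (minutes);
        # interpreting "total length of each contact" as a per-contact average is an assumption.
        "amount": sum(s[1] for s in sessions) / len(sessions),
        # "duration": period over which the participant was exposed to the intervention (days).
        "duration_days": (max(starts) - min(starts)).days,
        # "frequency": how often contact is made within the observation period.
        "frequency": len(sessions),
        # "depth": variety of intervention content used.
        "depth": len(set().union(*(s[2] for s in sessions))),
    }


# Hypothetical participant with three sessions over roughly three weeks.
sessions = [
    (datetime(2024, 1, 1), 20.0, {"module1"}),
    (datetime(2024, 1, 8), 15.0, {"module1", "module2"}),
    (datetime(2024, 1, 20), 30.0, {"module3"}),
]
print(usage_subdimensions(sessions))
# {'amount': 21.66..., 'duration_days': 19, 'frequency': 3, 'depth': 3}
```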
2014
- (Bower et al., 2014) ⇒ Bower P, Brueton V, Gamble C, et al. (2014). “Interventions to Improve Recruitment and Retention in Clinical Trials: A Survey and Workshop to Assess Current Practice and Future Priorities.” In: Trials, 15:399.
2012a
- (Kelders et al., 2012) ⇒ Saskia M. Kelders, Robin N. Kok, Hans C. Ossebaard, and Julia E.W.C. Van Gemert-Pijnen. (2012). “Persuasive System Design Does Matter: A Systematic Review of Adherence to Web-based Interventions.” In: Journal of Medical Internet Research, 14(6).
- QUOTE: ... A percentage of adherence was calculated to enable us to compare the different interventions. We did this by calculating the percentage of participants that adhered to the intervention. For example, when the intended use of an intervention was “complete 8 modules” and 60 out of 100 participants completed 8 modules, the adherence was 60%. For each intervention that was included, we calculated one overall adherence percentage. When more studies about the same intervention yielded different adherence percentages, we calculated the overall adherence percentage using a weighted average, based on the number of participants in each study. Furthermore, when the study included a waiting list and the respondents in this waiting list received access to the intervention at a later stage, the adherence was calculated based on usage data for all participants, including the waiting list group. ...
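The adherence calculation described in this quote can be illustrated with a minimal Python sketch. The per-study counts below are hypothetical; the combination rule follows the quote, averaging study-level adherence percentages weighted by each study's number of participants.

```python
# Minimal sketch of the adherence calculation described in the quote:
# per-study adherence is the share of participants who achieved the intended
# use, and multiple studies of one intervention are combined with a weighted
# average based on each study's number of participants.
from typing import List, Tuple


def study_adherence(adherent: int, participants: int) -> float:
    """Adherence for one study: % of participants meeting intended use,
    e.g., 60 of 100 completing all 8 modules -> 60%."""
    return 100.0 * adherent / participants


def overall_adherence(studies: List[Tuple[int, int]]) -> float:
    """Weighted-average adherence across studies of the same intervention,
    weighted by each study's number of participants.

    Each entry is (adherent participants, total participants)."""
    total_participants = sum(n for _, n in studies)
    return sum(study_adherence(a, n) * n for a, n in studies) / total_participants


# Two hypothetical studies of the same intervention: 60/100 and 30/50 adherent.
print(overall_adherence([(60, 100), (30, 50)]))  # 60.0
```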
2012b
- (Fenerty et al., 2012) ⇒ Sarah D. Fenerty, Cameron West, Scott A. Davis, Sebastian G. Kaplan, and Steven R. Feldman. (2012). “The Effect of Reminder Systems on Patients’ Adherence to Treatment.” In: Patient Preference and Adherence, 6.
2005
- (Eysenbach, 2005) ⇒ Gunther Eysenbach. (2005). “The Law of Attrition.” In: Journal of medical Internet research, 7(1).