Digital Biomarkers

Tools and Devices - Review Article

Considerations for Conducting Bring Your Own “Device” (BYOD) Clinical Studies

Demanuele C.a · Lokker C.b · Jhaveri K.c · Georgiev P.a · Sezgin E.d · Geoghegan C.e · Zou K.H.f · Izmailova E.g · McCarthy M.h

Author affiliations

aPfizer Inc, Cambridge, MA, USA
bMcMaster University, Hamilton, ON, Canada
cPhilips Sleep and Respiratory Care, Monroeville, PA, USA
dThe Abigail Wexner Research Institute, Nationwide Children’s Hospital, Columbus, OH, USA
ePatient and Partners LLC, Madison, CT, USA
fGlobal Medical Analytics and Real-World Evidence, Viatris Inc, Canonsburg, PA, USA
gKoneksa Health, New York, NY, USA
hNovartis Ireland Ltd., Dublin, Ireland

Corresponding Author

Marie McCarthy, marie-1.mccarthy@novartis.com

Digit Biomark 2022;6:47–60

Abstract

Background: Digital health technologies are attracting attention as novel tools for data collection in clinical research. They present alternatives to in-clinic data collection, which often yields snapshots of the participants’ physiology, behavior, and function that may be prone to biases and artifacts, e.g., white coat hypertension, and not representative of the data in free-living conditions. Modern digital health technologies equipped with multi-modal sensors combine different data streams to derive comprehensive endpoints that are important to study participants and are clinically meaningful. Used for data collection in clinical trials, they can be deployed as provisioned products, where the technology is supplied at study start, or in a bring your own “device” (BYOD) manner, where participants use their own technologies to generate study data. Summary: The BYOD option has the potential to be more user-friendly, allowing participants to use technologies that they are familiar with, ensuring better participant compliance, and potentially reducing the bias that comes with introducing new technologies. However, this approach presents distinct technical, operational, regulatory, and ethical challenges to study teams. For example, BYOD data can be more heterogeneous, and recruiting historically underrepresented populations with limited access to technology and the internet can be challenging. Despite the rapid increase in digital health technologies for clinical and healthcare research, BYOD use in clinical trials is limited, and regulatory guidance is still evolving. Key Messages: We offer considerations for academic researchers, drug developers, and patient advocacy organizations on the design and deployment of BYOD models in clinical research.
These considerations address: (1) early identification and engagement with internal and external stakeholders; (2) study design including informed consent and recruitment strategies; (3) outcome, endpoint, and technology selection; (4) data management including compliance and data monitoring; (5) statistical considerations to meet regulatory requirements. We believe that this article acts as a primer, providing insights into study design and operational requirements to ensure the successful implementation of BYOD clinical studies.

© 2022 The Author(s). Published by S. Karger AG, Basel


Introduction

Digital health technologies (DHTs) have been defined as “a system that uses computing platforms, connectivity, software, and sensors for healthcare and related uses” [1]. Using technologies to collect data created and recorded by participants, known as “person-generated health data,” has gained considerable interest and traction in clinical trials. These tools can collect patient data, enabling objective and frequent monitoring of physiology, behavior, and function compared to snapshot in-clinic assessments. In addition, the growth in digital health technology use in the general population has enabled the bring your own “device” (BYOD) model. While this approach previously focused on using individuals’ smartphones to capture electronic patient-reported outcome data, there is growing interest in expanding the BYOD model to include personal digital health technologies.

The digital health technologies with the greatest potential to be amenable to BYOD studies are fitness trackers and smartwatches [2]. These devices offer intuitive, easy-to-use interfaces, and their embedded multi-modal sensors can derive various physiological measures, including physical activity, sleep, and vital sign data (e.g., heart rate, heart rate variability, pulse oximetry) [3-6]. Smartphones have increasing utility as digital health technologies, with inbuilt sensors such as accelerometers, global positioning system sensors, microphones, cameras, gyroscopes, and magnetometers. These sensors serve as a source of data for passive monitoring and deliver functional assessments to study participants via mobile applications [7, 8].

The benefits of BYOD studies are multifarious: the approach allows participants to use their own technologies, leading to better compliance [9, 10] and, potentially, less chance of introducing biases, such as the Hawthorne effect, from monitoring technologies [11]. For participants, there is familiarity with and access to the technology they use in their daily lives [12, 13]. Sponsors can design patient-centric studies with lower costs and burden on study sites [12-14]. The BYOD model potentially expands participation in clinical trials for populations with limited access to clinical facilities, e.g., older adults, people with disabilities, or those living in remote locations [15]. Conversely, limiting eligibility to those with good health and digital literacy, internet connection, and the latest technologies may bias results with data not representative of the target population [15]. BYOD models may not be appropriate in all circumstances; the technologies required to generate study endpoints may not be readily available to the study populations. A model that provisions devices for subsets of participants who do not possess the required technology may be optimal [15, 16]. A comparison between BYOD and provisioned device options is presented in Table 1.

Table 1.

Comparison of the BYOD and provisioned technology options when designing a clinical study

The BYOD approach is feasible in non-interventional and interventional studies using observational, randomized controlled trial, pragmatic (practical) clinical trial, and real-world evidence study designs (see Appendix 1, available online at www.karger.com/doi/10.1159/000525080, for study definitions, best practices, and checklists). Many observational and postmarketing research questions can be addressed with BYOD to collect data with minimal disruptions to daily life. For example, BYOD models have been successfully deployed in surveillance studies where sensor data from fitness trackers and smartwatches generated data on influenza and COVID-19 infections from 1.3 million [17], 200,000 [18], and 30,000 [19] individuals. These studies follow a typical BYOD approach where participants download a study-specific application to their smartphones to capture patient-reported outcomes, diagnostic test results, and data from connected technologies [13]. BYOD studies incorporating digital health technologies are beginning to emerge in interventional clinical trials [20-22]. Figure 1 showcases examples of different BYOD configurations utilized in clinical studies.

Fig. 1.

Examples of possible BYOD configurations: (A) smartphone acts as the DHT, using study app(s) deployed on the participant’s smartphone to collect a variety of data, including (i) electronic patient-reported outcomes; (ii) diagnostic tests; (iii) active performance outcome assessments (PerfO), where participants are guided by the app and carry out physical assessments, e.g., a timed tapping assessment, walking task, or guided sit-to-stand test; (iv) passive data generated by the smartphone sensors without deliberate, intentional input from the study participant, e.g., steps, global positioning system, weather, and voice sentiment. (B) Smartphone acts as a data ingestor/mobile hub, collecting data via study app(s) connected via Bluetooth or Wi-Fi to one or more DHTs. (C) Standalone DHTs, eSIM enabled, transmitting study data directly to the database. Adapted with permission from DIME [55].

Objective

This paper provides considerations for designing and deploying a BYOD model to capture data for clinical studies. These considerations address: (1) early identification and engagement with internal and external stakeholders; (2) study design including informed consent and recruitment strategies; (3) outcome, endpoint, and technology selection; (4) data management including compliance and data monitoring; and (5) statistical considerations to meet regulatory requirements (Fig. 2). This paper is framed using the Agency for Healthcare Research and Quality guidelines for real-world evidence study design [23]. These guidelines have been broadly adopted and were selected as an overarching guide to develop our approach to support sponsors and researchers in designing BYOD studies across diverse patient populations and therapeutic areas. These considerations are intended for a broad audience, including academic researchers, drug developers, and patient advocacy organizations.

Fig. 2.

Framework for deploying a BYOD model in clinical studies.

Section 1: Early Identification and Engagement with Internal and External Stakeholders

Understanding the needs, concerns, and impact of the BYOD approach on stakeholders can improve the quality and efficiency of the study. Therefore, the proposed steps in the overarching approach are as follows: (1) stakeholder mapping to identify both internal and external stakeholders; (2) stakeholder engagement, a bidirectional interaction to gain understanding; and (3) stakeholder management to facilitate the smooth operationalization of BYOD in the clinical trial.

Internal Stakeholders

A cross-functional team approach is required, including, but not limited to, representatives from the following groups: data management, medical affairs, biostatistics, data science and data engineering, clinical operations, regulatory affairs, and safety (Table 2). Consultation is essential to understand the potential impact of BYOD on the different workstreams [24].

Table 2.

Internal stakeholders to engage when developing a BYOD study and considerations from their perspectives

External Stakeholders

Early engagement with participants, caregivers, vendors, site personnel, and other external stakeholders (Table 3) will maximize value and minimize the burden during study development. These are novel study designs; therefore, clear communication, education, support, and training are essential for a successful study to ensure that the sites and investigators are appropriately trained in the study objectives and the considered technology. Participant materials need to be informative, addressing any participants’ concerns with respect to data privacy, mobile data costs, and the consenting process. Gauging the digital literacy of potential study participants during these initial activities will allow for planning and developing appropriate training supports (e.g., videos, a support line) to mitigate differences among participants in how they engage with the technology. Developing a partnership approach with the technology vendor is important so technical issues and mitigation strategies can be jointly developed.

Table 3.

External stakeholders to engage when developing a BYOD study and considerations from their perspectives

Section 2: Study Design Considerations

Study Objectives

The first set of questions is: Are digital health technologies appropriate to address the study objectives and research questions? Does a BYOD study design provide data at least equivalent to traditional approaches, and does it add value by providing insights not obtainable via standard approaches?

BYOD Specifications

Study teams creating the evidence dossier to support a specific digital health technology and digital endpoints should consider the data context: data generated by participants’ own technology contrasts with a provisioned approach, where the team has complete control of the technology. Study applications should capture information about the data source, i.e., technology specifications (e.g., model, version, manufacturing date), to inform the data analysis and interpretation. Study teams must define the minimum technological requirements necessary to generate the required digital endpoints, such as the operating system, model, firmware, and data storage requirements. Once minimum requirements are established, study teams may further restrict the technology type for a more targeted study or proceed with more flexibility and specify analysis sets [16].
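
A minimum-requirements definition of this kind can be operationalized as a simple screening check. The sketch below is purely illustrative; the field names, device models, and thresholds are hypothetical assumptions, not specifications taken from this article.

```python
# Hypothetical sketch: screening a participant's device record against
# study-defined minimum technological requirements.
# All field names, models, and thresholds here are illustrative assumptions.

MIN_REQUIREMENTS = {
    "os": {"iOS": (14, 0), "Android": (10, 0)},   # minimum OS versions
    "allowed_models": {"WatchA-v2", "WatchA-v3", "TrackerB"},
    "min_free_storage_mb": 500,
}

def meets_minimum_requirements(device: dict) -> bool:
    """Return True if a device record satisfies the minimum specifications."""
    min_os = MIN_REQUIREMENTS["os"].get(device["os_name"])
    if min_os is None or tuple(device["os_version"]) < min_os:
        return False  # unsupported OS or version below the study minimum
    if device["model"] not in MIN_REQUIREMENTS["allowed_models"]:
        return False  # model outside the study's allowed set
    return device["free_storage_mb"] >= MIN_REQUIREMENTS["min_free_storage_mb"]

# Example: an eligible device record
print(meets_minimum_requirements({
    "os_name": "Android", "os_version": (12, 1),
    "model": "TrackerB", "free_storage_mb": 2048,
}))  # True
```

In practice such a check would run inside the study application at enrollment, and the same device metadata (model, version, firmware) would be stored alongside the sensor data to support the analysis-set definitions mentioned above.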

Developing Eligibility Criteria

In any protocol, the eligibility criteria define the population of interest [23]. When establishing eligibility criteria for BYOD studies, the potential limitation of BYOD specifications needs to be considered [19].

Eligibility criteria considerations affect the generalizability of the results, may bias the collected sample, and impact inclusion and diversity in clinical studies. Ownership of a specific technology or internet connection, as part of the eligibility criteria, may bias the study cohort towards individuals of higher socioeconomic strata and deter the participation of those with limited access to technology and the internet [25]. Participants may not feel comfortable with the sponsor accessing study data from their technology where their personal information is stored [26]. Technology should never limit study participation. Mitigation strategies that provide these participants with stand-alone technology or internet service promote broader inclusivity [15]. The Critical Path Institute’s electronic patient-reported outcome Consortium recommends provisioning devices for participants who do not own the technology required for the study [27]. This helps address gaps due to low socioeconomic status.

Recruitment

BYOD studies could potentially limit participation to those who own digital health technologies; therefore, ensuring that the study sample represents the population of interest is essential. A recruitment plan, even for BYOD studies heavily reliant on technology, should incorporate a variety of methods such as broadcast and clinic advertisements and targeted outreach. Recruiting solely through social media can exclude participants with no or limited access to the internet, such as those living in rural areas or with low socioeconomic status [28, 29]. Participants may own the required technology to enroll in the study (e.g., a smartphone) and may be willing to participate but lack other resources such as a reliable internet connection and research awareness. Such practices nurture the digital divide and limit the ability to understand the needs of historically underrepresented communities.

Ethics and Informed Consent

The consent process in BYOD studies includes training that may require face-to-face interaction to explain the study’s aim, scope, and risks [30]. In studies deemed “low risk” (e.g., observational, non-interventional studies collecting nonidentifiable information), online consent could be used [31], depending on local regulations. Procedures must ensure participant and technology user verification (e.g., confirming identity and eligibility criteria), mitigating data privacy and security risks [32]. The process for participants to withdraw consent and disengage their technology from the study needs to be straightforward. Study software installation also needs to be simple, responsive, and intuitive. Technologies must comply with local regulations, e.g., the Health Insurance Portability and Accountability Act or the General Data Protection Regulation. Local institutional review boards should be consulted and study protocols approved as necessary [30].

Communications need to be tailored for health and technical literacy. Consent materials should be explicit and appropriate so participants can understand their responsibilities, the study data collected by their technology, the method and duration of data storage, the data usage, privacy and sharing, and how it aligns with the research goals. Participants need to be aware of the importance of not changing their technology during the study without alerting the study team and have information related to technical support should their technology malfunction or have issues with the study application.

Section 3: Outcome, Endpoint, and Technology Selection

Selection of Appropriate Outcomes

Despite the convenience and potential cost savings of a BYOD model, the same approaches used for outcome selection in traditional clinical trials should be applied [23]. Sponsors should select measures that are meaningful to the population being studied [33-35]. The United States Food and Drug Administration (FDA) recognizes that the clinical meaningfulness of digital outcomes can impact regulatory decision-making [36]. It is essential to consider the multiple stakeholders (as in Tables 2, 3), study scope and objectives, and the intended application of the outcomes [23].

Digital Endpoints

Endpoints need to be reliable and accurate, measure the treatment effects, be validated against an appropriate reference standard, and be assessed in the population under consideration [37-40]. A fundamental challenge of BYOD models is ensuring the equivalence of digital endpoints captured or derived from different technologies. As discussed below, study teams should rely on published literature or conduct methodology studies in the population of interest to establish differences in endpoints derived from data collected by the different technologies under consideration. Estimating measurement errors and data reliability should also be addressed in the analysis (Table 4).

Table 4.

Key considerations for analyzing data collected by BYOD models

Technology Selection

Teams should consider the proportion of study participants who may have access to the proposed technology and develop plans to provision technologies to those who do not. The selected technology needs to be “fit for purpose” [41]. Digital health technologies can be used to collect digital measures directly, such as heart rate [42], and to derive novel endpoints such as resting tremor [43] or scratch and sleep quantification [44]. The selection process is impacted by the sampled data quality, such as the signal-to-noise ratio, which impacts the downstream derivation of the endpoints [45]; the battery life, which can affect compliance; and the ability to deploy the study app.

Digital health technology selection must consider the analytical and clinical validation of the endpoints, using frameworks such as the V3 framework, encompassing device verification, analytical and clinical validation [38], security practices, data rights and governance, utility, usability, and their economic feasibility [46]. The EVIDENCE (EValuatIng connecteD sENsor teChnologiEs) checklist can be used to support technology performance evaluation [47].

Pretrial feasibility studies to evaluate suitability, establish measurement errors, and test equivalence of different technologies under consideration may be needed [38]. Feasibility studies can be expensive and time-consuming. Secondary data sources such as vendor quality documents and peer-reviewed literature validating the technologies and establishing data accuracy can be an alternative solution [48]. Published studies and publicly available datasets, preferably combining data from multiple digital health technologies, can be leveraged to compare technologies [46]. Identifying systematic measurement errors and data limitations is crucial for interpreting BYOD study data (e.g., different wristbands can impact the sensor-skin interface and, subsequently, the derived endpoints) [49]. It also informs endpoint selection and the expected data variability, which helps estimate sample size and develop the analysis plan.

Deliberation is required regarding the data capture capabilities of the technology, including (a) the type of data output (e.g., will the device provide raw data, epoch-level data, summary data, or endpoint-level data?); (b) data ingestion and transfer (e.g., is this enabled via Wi-Fi or Bluetooth, and does it need a dedicated mobile hub?); (c) data storage capacity (e.g., data generation based on the chosen sampling rate, data storage before data transfer initiation, and memory characteristics); (d) data security (appropriate and up-to-date cybersecurity processes and procedures [46]); and (e) system validation, which should include computer system validation according to international best practice [50].

Section 4: Data Management and Operations

Compliance

BYOD models have the advantage that participants carry their technology for personal use and regularly interact with it – this can be leveraged to engage participants and provide information and motivation alongside data capture. Different strategies can be deployed to maintain compliance, e.g., alerts via the study application, automated reminders, or direct text messages and phone calls from the study team. Compliance reminders can be incorporated as part of the study intervention to manage adherence to a medication or therapy as successfully deployed across several therapeutic areas, including cardiovascular [51] and respiratory disease [52]. Ongoing data monitoring strategies based on study-specific compliance algorithms can be used to alert study teams to noncompliant participants and help optimize interaction with participants and technical support. As in any study, compliance thresholds, e.g., a “valid” day consists of 10 h of wear time every 24 h [53], must be outlined a priori in the protocol and the statistical analysis plan [54].
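
A wear-time rule of the kind cited above can be turned into a simple, automated compliance check. The following sketch is a hypothetical illustration: the data structure and the 70% valid-day threshold are assumptions for the example, while the 10-h valid-day definition echoes the rule mentioned in the text [53].

```python
# Illustrative sketch of a study-specific compliance algorithm.
# The valid-day rule (>= 10 h wear time per 24 h) follows the example in the
# text; the participant data structure and 70% threshold are hypothetical.

VALID_DAY_HOURS = 10           # minimum daily wear time for a "valid" day
MIN_VALID_DAY_FRACTION = 0.7   # hypothetical protocol-defined threshold

def flag_noncompliant(wear_hours_by_participant: dict) -> list:
    """Return IDs of participants whose fraction of valid days is below threshold.

    wear_hours_by_participant maps participant ID -> list of daily wear hours.
    """
    flagged = []
    for pid, daily_hours in wear_hours_by_participant.items():
        valid_days = sum(1 for h in daily_hours if h >= VALID_DAY_HOURS)
        if valid_days / len(daily_hours) < MIN_VALID_DAY_FRACTION:
            flagged.append(pid)
    return flagged

data = {
    "P001": [12, 11, 10, 9, 13],  # 4/5 valid days -> compliant
    "P002": [6, 5, 11, 4, 7],     # 1/5 valid days -> flagged
}
print(flag_noncompliant(data))  # ['P002']
```

A routine like this, run centrally on incoming data, could trigger the reminder strategies described above (in-app alerts, texts, or calls) for flagged participants; the thresholds themselves must still be prespecified in the protocol and statistical analysis plan.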

Data Capture and Monitoring

Technology and user-related issues may affect data quality in a BYOD study. Connectivity can affect data quality and cause errors in the capture and synchronization of the data [15]. Participants may lose or change their DHTs, or the software may be upgraded during the trial, adding to potential variations in data quality. Incorrect usage (e.g., failing to charge, update, or wear/use the technology) adds to the challenges of adequate data capture and contributes to data loss [15]. It is therefore essential to implement an automated and centralized data monitoring system [50].

Data Transmission. In contrast to studies with provisioned technology, where data capture falls under the auspices of the study teams who provide SIM cards, data plans, and mobile hubs where appropriate, BYOD studies rely on the participants’ own connectivity. Internet access required for data transfer may be problematic: participants may have limited data allowance, restricted by their data plan. International travel during the study could impose roaming charges or disable data plans. Consideration should be given to contingency plans such as data uploads configured to use Wi-Fi as the primary preference to reduce participant costs and mitigate data loss. Reimbursement plans should consider expenses associated with data transmission [26].

Data Heterogeneity. This can arise unexpectedly during the study and must be addressed during data processing and analysis. Causes of data heterogeneity include (1) changes in the software such as upgrades to the operating system and internal signal processing algorithms; (2) variations in the digital health technology: participants may be wearing different versions of the same technology allowed within eligibility criteria that differ in size and wearability characteristics (e.g., different wristbands) which may impact the accuracy of sensors and battery life; and (3) participants may change technologies during the study due to loss, malfunction or for personal reasons. This information needs to be captured by the study application. The study team should determine whether it is appropriate to incorporate this data in the analysis based on their definition of a valid dataset prior to study start as outlined in the statistical analysis plan.

Data Privacy and Security. DHTs may store personally identifiable information and personal health information. These data need to be safeguarded in accordance with local data protection requirements. Study vendors (Table 3) need to provide evidence that the software influencing functionality and data capture (e.g., firmware, cloud, and study apps) is adequately protected and up to date, ensuring the data are secure [46]. The Digital Medicine Society Playbook outlines key data privacy and security considerations [55]. The Clinical Trials Transformation Initiative provides additional resources that outline approaches for securing data, including encryption, automatic backup, and user authentication [56]. Risk management plans should address data security breaches and potential interference of study tools with other applications on the participant’s technology [50]. Before the study starts, investigators should use this information to define how the data will be transmitted, secured, and evaluated for completeness, and establish analysis rules to address data heterogeneity [15].

Data Sharing Considerations

Although participants continue to have access to data routinely available from their technology, investigators and sponsors may choose to share study data with participants for transparency or because participants prefer it [57]. If this is deemed appropriate, the study team needs to determine which of the study-specific data (e.g., newly derived digital endpoints) can be shared, including the frequency, timing (e.g., during study participation or at the end of the study), and the mode of communication. The Clinical Trials Transformation Initiative has suggested a decision tree for this purpose [58].

Confounding Variables and Contextual Data

Confounding variables influence results and impact the interpretation of data and study findings. Thus, these variables must be collected and accounted for in the analyses. Specific to BYOD, technology ownership duration (in months/years) and usual wear time (hours/day) can help interpret compliance and other potential sources of bias [59]. Participants’ digital literacy may be especially relevant in studies involving active assessment (such as conducting performance tasks via an app) instead of passive monitoring of activity with wearables [60].

Obtaining contextual data (i.e., relevant background information) can further assist in analyzing and interpreting the more heterogeneous BYOD data; e.g., when measuring physical activity, it is helpful to know what may impact participants’ daily activity patterns. Contextual variables that can be obtained with minimum burden to participants include employment status (full-time employment vs. retired), periods of vacation time (documented in a participant diary), weekend/weekday, hospitalization, and, if their location is known, weather and seasonality data [61]. Concerns for data and personal privacy remain of utmost importance: the collection of contextual data can be made optional for participants and requires the approval of the local ethics board.

Section 5: Statistical Considerations

Sample Size Estimation

Estimating the number of participants required to address the scientific objectives of the study is an essential part of any study design [23]. The study size rationale varies by study type, and corresponding checklists guide sample sizing (Appendix 1). To address the added variability in BYOD studies, one can predetermine the minimum sample size required for subsets of the data to address specific hypotheses (e.g., based on demographics or expected compliance, by technology) in addition to the overall total sample size [48].

Sample size estimation should also account for the expected attrition and noncompliance in technology usage [62], which can significantly impact study size and feasibility of BYOD approaches. Strategies to mitigate dropout rates include: (1) including the participants’ perspective in the design of digital tools to optimize usability benefit; (2) incorporating comprehensive participant and site training; (3) having clear audio-visual instructions for every interaction; and (4) offering 24/7 technical support systems to participants and caregivers [63].
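
One standard way to account for expected attrition is to inflate the required evaluable sample size by the anticipated dropout rate, n / (1 − d), rounded up. The sketch below illustrates this adjustment; the numbers are purely hypothetical and not drawn from the article.

```python
# Illustrative sketch: inflating a required sample size to offset expected
# attrition, using the standard adjustment n / (1 - d). Figures are hypothetical.
import math

def attrition_adjusted_n(n_required: int, dropout_rate: float) -> int:
    """Return the enrollment target needed to retain n_required evaluable
    participants given an expected dropout rate d in [0, 1)."""
    if not 0 <= dropout_rate < 1:
        raise ValueError("dropout_rate must be in [0, 1)")
    return math.ceil(n_required / (1 - dropout_rate))

# Example: 200 evaluable participants needed, 20% expected dropout
print(attrition_adjusted_n(200, 0.20))  # 250
```

The same adjustment can be applied per subset (e.g., per technology type or demographic stratum) when minimum subset sizes have been predetermined, as discussed above.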

Statistical Analysis Considerations

The statistical analysis plan must include (1) detailed information on the derivation and analysis of the digital endpoints; (2) the thresholds and methods used to establish minimally clinically important differences [64, 65]; (3) multiple hypothesis testing; and (4) methodologies to assess the accuracy, sensitivity, and specificity of any predictive models built on DHT data [48]. Specific BYOD data considerations are outlined in Table 4.

Conclusion

Today, a large percentage of the global population possesses the technology to generate health data: 85% of Americans now own a smartphone, and almost one in five regularly uses a fitness tracker [66, 67]. This presents an opportunity to use these technologies to objectively quantify human physiology, behavior, and function in the real world. Harvesting these data with the rigor required for clinical studies demands careful consideration and planning.

The BYOD model provides certain advantages over conventional studies that deploy provisioned devices, including trial participants’ familiarity with their own technology, such that the technology itself does not function as an intervention, and the reduced burden of carrying additional devices. However, its widespread use is hampered by a lack of commonly accepted methodologies describing critical success factors and an evolving regulatory landscape. Adopting the Agency for Healthcare Research and Quality guidelines [23], we provide the following considerations on five key aspects of the design and deployment of BYOD studies for clinical research.

Early Identification and Engagement with Internal and External Stakeholders

The input from a variety of stakeholders is key to successful technology selection and implementation. We identified internal (cross-functional team members responsible for study design, protocol development, and execution) and external (participants, caregivers, service providers, site personnel) stakeholders needed to provide input and facilitate the operationalization of BYOD models. Participants should be consulted during the study design phase when selecting outcomes, endpoints, and technologies to appropriately address their needs and preferences [68]. Focus groups [69] allow participants to test technologies before deployment and provide an opportunity to collect valuable feedback on the study design and usability of accompanying technology (e.g., study applications). This facilitates the early identification and mitigation of design and operational issues. End-of-study questionnaires that evaluate the participants’ experience in the trial can be leveraged to optimize future studies [70].

Study Design Recommendations, Including Informed Consent and Recruitment Strategies

The technology of choice should be appropriate to address study objectives and research questions. The eligibility criteria should include technological requirements to generate the data stipulated in the study objectives. The generalizability of the results should be considered very carefully as it may be impacted by the preselection of participants with access to selected technologies. There can be potential inbuilt bias if recruitment is restricted to the latest technology model. Recruitment and consenting should include various options to engage broad socioeconomic strata in compliance with local regulations. Consideration should be given to digital literacy, health conditions, and education level of participants.

Digital health technology data collected from more diverse populations can reduce bias in pharmaceutical research by making clinical trials accessible to communities that are distant from traditional clinical sites [71]. However, access to the internet and technology is limited in underrepresented communities [72]. Mitigation strategies, such as provisioned technologies and connectivity-enabling approaches, ensure that BYOD studies do not exacerbate the digital divide to the detriment of participants [15]. Transparency regarding study methodology, design, and data limitations is crucial to improving BYOD strategies and advancing all-inclusive research and development.

Outcome, Endpoint, and Technology Selection

BYOD study outcome and endpoint selection is governed by the same principles as in traditional clinical studies, including selecting appropriate digital health technologies to generate reliable, accurate, and clinically meaningful endpoints. In addition, the technology manufacturer's "end of life" or technology obsolescence strategy needs to be considered to ensure that the technology selected for the study does not restrict the inclusion of particular socioeconomic groups or regions where newer models may not be readily available or older versions are no longer compatible.

The context of use is a crucial consideration because technologies are not universally optimized, and some have been shown to perform worse in certain groups. For example, the photoplethysmography sensors used to measure oxygen saturation and respiration rate are less reliable on highly pigmented skin [73]. Similarly, the accuracy of some wrist-worn devices in measuring physical activity levels in older adults and those reliant on mobility aids has been questioned [74]. Such examples highlight the need to ensure that the technology being considered has been validated in the population under study.

Data Management and Operations, Including Compliance and Data Monitoring

The BYOD approach needs to account for the specific challenges of data collection, including data transmission, which requires increased cooperation of participants; data heterogeneity as multiple technologies may be used along with unplanned software upgrades; and protection of personally identifiable information. As in conventional studies, participant compliance needs to be monitored with solutions to intervene if compliance falls below a defined threshold. Teams should determine appropriate strategies for sharing study data with participants.
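The compliance-monitoring step described above can be sketched as a simple threshold check. This is an illustrative sketch under stated assumptions, not a method from the article: the 80% threshold, participant IDs, and wear-day counts are invented for the example.

```python
# Minimal sketch of threshold-based compliance monitoring in a BYOD study.
# The threshold and data below are assumptions for illustration only.

COMPLIANCE_THRESHOLD = 0.8  # assumed: at least 80% of expected wear days

def flag_noncompliant(wear_days: dict, expected_days: int,
                      threshold: float = COMPLIANCE_THRESHOLD) -> list:
    """Return IDs of participants whose proportion of valid wear days
    falls below the protocol-defined threshold, sorted for stable output."""
    return sorted(
        pid for pid, days in wear_days.items()
        if days / expected_days < threshold
    )

# Example: valid wear days recorded per participant over a 30-day window
wear_days = {"P001": 29, "P002": 18, "P003": 25}
print(flag_noncompliant(wear_days, expected_days=30))  # ['P002']
```

Flagged participants would then trigger the intervention the study team has predefined, e.g., an automated reminder or a site follow-up call.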

Statistical Considerations to Align with Regulatory Requirements

The BYOD study’s statistical analysis plan should assess potential biases in the study population, variability introduced by deploying more than one type of technology, variability arising from a mixture of BYOD and provisioned technologies, and approaches to account for missing data and conduct fit-for-purpose sensitivity analyses.
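As a hedged illustration of why device-related variability matters to the analysis plan, the sketch below summarizes a toy dataset by device type and runs a simple leave-one-device-out sensitivity check on the pooled estimate. All device labels and step counts are invented, and this is not a substitute for a prespecified statistical model (e.g., one including device type as a covariate or random effect).

```python
# Illustrative sketch (not the authors' method): inspecting between-device
# variability before pooling BYOD data streams. All values are invented.
from statistics import mean, stdev

# daily step counts grouped by the technology that produced them
by_device = {
    "watch_A": [5200, 6100, 5800, 6000],
    "watch_B": [4900, 5300, 5100, 5600],
    "provisioned": [5500, 5700, 5900, 5400],
}

# per-device summaries: a large spread in means may motivate modeling
# device type explicitly rather than naively pooling the streams
summaries = {d: (round(mean(v)), round(stdev(v))) for d, v in by_device.items()}
for device, (m, s) in summaries.items():
    print(f"{device}: mean={m}, sd={s}")

# simple sensitivity check: does the pooled mean shift materially
# when any single device type is excluded from the analysis?
pooled = mean(v for vals in by_device.values() for v in vals)
for excluded in by_device:
    rest = [v for d, vals in by_device.items() if d != excluded for v in vals]
    print(f"pooled mean without {excluded}: {round(mean(rest))} (all: {round(pooled)})")
```

The same leave-one-out pattern extends to mixtures of BYOD and provisioned technologies, and analogous sensitivity analyses (e.g., with and without imputed values) address missing data.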

In conclusion, this article provides study design, technical, operational, and statistical considerations for successfully implementing BYOD models in clinical research. Questions remain about the feasibility of the BYOD approach: does it provide the same insights as a provisioned device approach? Can it be fully operationalized in a pivotal global study? While the body of evidence does not yet exist, the field of digital health research is rapidly evolving, driven in part by the growing interest in decentralized and hybrid trials. This is exemplified by the FDA's statement in its 2021 draft guidance that sponsors should consider the appropriateness of participants' own technology to collect data [30]. We anticipate that future studies will showcase examples of BYOD deployment that help refine the concepts outlined in this paper, document key learnings, and include additional considerations stemming from emerging regulatory guidelines.

Acknowledgments

This publication is a result of collaborative research performed under the auspices of the Digital Medicine Society (DiMe). Special thanks to Jennifer Goldsack, Isaac Rodriguez-Chavez, Benjamin Vandendriessche, Elizabeth L. Kunkoski, and Amir Lahav for their detailed manuscript reviews.

Conflict of Interest Statement

Dr. Charmaine Demanuele and Pirinka Georgiev are employees and shareholders of Pfizer Inc. Dr. Elena Izmailova is an employee of Koneksa Health and may own company stock. Dr. Kelly H. Zou is an employee and shareholder of Viatris Inc. Ms. Marie McCarthy is an employee of Novartis Ireland Ltd. and may own company stock. Dr. Cynthia Lokker, Dr. Emre Sezgin, Dr. Krishna Jhaveri, and Mrs. Cindy Geoghegan have no conflicts of interest to declare. The views expressed are the authors’ own and do not necessarily represent those of their employers.

Funding Sources

None of the authors received funding for the preparation of the data or the manuscript.

Author Contributions

Charmaine Demanuele, Pirinka Georgiev, Elena Izmailova, Kelly H. Zou, Marie McCarthy, Cynthia Lokker, Emre Sezgin, Krishna Jhaveri, and Cindy Geoghegan substantially contributed to the conception of this manuscript and played a role in drafting and revising the article. All authors approved the final version for publication.


Footnotes

BYOD is a colloquial term and not associated with regulated devices as defined in Section 201(h) of the Federal Food, Drug, and Cosmetic Act.



References

  1. FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and other Tools) resource. Silver Spring, MD: Food and Drug Administration (US); 2016. Glossary. 2016 Jan 28 (Updated 2021 Nov 29). Available from: https://www.ncbi.nlm.nih.gov/books/NBK338448/. Co-published by Bethesda, MD: National Institutes of Health (US).
  2. Huhn S, Axt M, Gunga H-C, Maggioni MA, Munga S, Obor D, et al. The impact of wearable technologies in health research: scoping review. JMIR Mhealth Uhealth. 2022;10(1):e34384.
  3. Roberts DM, Schade MM, Mathew GM, Gartenberg D, Buxton OM. Detecting sleep using heart rate and motion data from multisensor consumer-grade wearables, relative to wrist actigraphy and polysomnography. Sleep. 2020;43:zsaa045.
  4. Buekers J, Theunis J, De Boever P, Vaes AW, Koopman M, Janssen EVM, et al. Wearable finger pulse oximetry for continuous oxygen saturation measurements during daily home routines of patients with chronic obstructive pulmonary disease (COPD) over one week: observational study. JMIR Mhealth Uhealth. 2019;7:e12866.
  5. Dunn J, Kidzinski L, Runge R, Witt D, Hicks JL, Schüssler-Fiorenza Rose SM, et al. Wearable sensors enable personalized predictions of clinical laboratory measurements. Nat Med. 2021;27(6):1105–12.
  6. Mittlesteadt J, Bambach S, Dawes A, Wentzel E, Debs A, Sezgin E, et al. Evaluation of an activity tracker to detect seizures using machine learning. J Child Neurol. 2020;35:873–8.
  7. Baker M, van Beek J, Gossens C. Digital health: smartphone-based monitoring of multiple sclerosis using Floodlight. 2020. Available from: https://www.nature.com/articles/d42473-019-00412-0 (accessed July 20, 2021).
  8. Manta C, Jain SS, Coravos A, Mendelsohn D, Izmailova ES. An evaluation of biometric monitoring technologies for vital signs in the era of COVID-19. Clin Transl Sci. 2020;13:1034–44.
  9. Byrom B, Doll H, Muehlhausen W, Flood E, Cassedy C, McDowell B, et al. Measurement equivalence of patient-reported outcome measure response scale types collected using bring your own device compared to paper and a provisioned device: results of a randomized equivalence trial. Value Health. 2018;21:581–9.
  10. Shahraz S, Pham TP, Gibson M, De La Cruz M, Baara M, Karnik S, et al. Does scrolling affect measurement equivalence of electronic patient-reported outcome measures (ePROM)? Results of a quantitative equivalence study. J Patient Rep Outcomes. 2021;5(1):23.
  11. McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. J Clin Epidemiol. 2014;67:267–77.
  12. Pugliese L, Woodriff M, Crowley O, Lam V, Sohn J, Bradley S. Feasibility of the “bring your own device” model in clinical research: results from a randomized controlled pilot study of a mobile patient engagement tool. Cureus. 2016;8:e535.
  13. Coons SJ, Eremenco S, Lundy JJ, O’Donohoe P, O’Gorman H, Malizia W. Erratum to: capturing patient-reported outcome (PRO) data electronically: the past, present, and promise of ePRO measurement in clinical trials. Patient. 2015;8:571.
  14. Yeomans A. The future of ePRO platforms. Applied Clinical Trials. 2014. Available from: https://www.appliedclinicaltrialsonline.com/view/future-epro-platforms (accessed February 16, 2022).
  15. Cho PJ, Yi JJ, Ho E, Shandhi MMH, Dinh YH, Patil A, et al. Demographic imbalances resulting from bring-your-own-device study design. JMIR Mhealth Uhealth. 2022;10(4):e29510.
  16. ClinicalTrials.gov. A study on impact of canagliflozin on health status, quality of life, and functional status in heart failure. Available from: https://clinicaltrials.gov/ct2/show/NCT04252287 (accessed September 14, 2021).
  17. Zhu G, Li J, Meng Z, Yu Y, Li Y, Tang X, et al. Learning from large-scale wearable device data for predicting the epidemic trend of COVID-19. Discrete Dyn Nat Soc. 2020;2020:1.
  18. Radin JM, Wineinger NE, Topol EJ, Steinhubl SR. Harnessing wearable device data to improve state-level real-time surveillance of influenza-like illness in the USA: a population-based study. Lancet Digit Health. 2020;2(2):e85–93.
  19. Quer G, Radin JM, Gadaleta M, Baca-Motes K, Ariniello L, Ramos E, et al. Wearable sensor data and self-reported symptoms for COVID-19 detection. Nat Med. 2021;27(1):73–7.
  20. Noga SJ, Rifkin RM, Manda S, Birhiray RE, Lyons RM, Whidden P, et al. Real-world (RW) treatment patterns and patient-related factors including quality of life (QoL), medication adherence, and actigraphy in community patients (pts) with newly diagnosed multiple myeloma (NDMM) transitioning from bortezomib (btz) to ixazomib: the US MM-6 community-based study. Blood. 2019;134(Suppl 1):3168.
  21. ClinicalTrials.gov. Study to evaluate the efficacy of EZC pak in adults with upper respiratory infection (URI). 2022. Available from: https://clinicaltrials.gov/ct2/show/NCT04943575 (accessed February 15, 2022).
  22. Crouthamel M, Quattrocchi E, Watts S, Wang S, Berry P, Garcia-Gancedo L, et al. Using a ResearchKit smartphone app to collect rheumatoid arthritis symptoms from real-world participants: feasibility study. JMIR Mhealth Uhealth. 2018;6:e177.
  23. Velentgas P, Dreyer NA, Nourjah P, Smith SR, Torchia MM, editors. Developing a protocol for observational comparative effectiveness research: a user’s guide. Rockville, MD: Agency for Healthcare Research and Quality (US); 2013.
  24. Project-Management.Info. Your guide to project management, agile and scrum. Available from: https://project-management.info/ (accessed July 21, 2021).
  25. Sieck CJ, Sheon A, Ancker JS, Castek J, Callahan B, Siefer A. Digital inclusion as a social determinant of health. NPJ Digit Med. 2021;4(1):52.
  26. Gwaltney C, Coons SJ, O’Donohoe P, O’Gorman H, Denomey M, Howry C, et al. “Bring your own device” (BYOD): the future of field-based patient-reported outcome data collection in clinical trials? Ther Innov Regul Sci. 2015;49:783–91.
  27. Critical Path Institute’s ePRO Consortium. Best practices for participant registration in clinical trials using bring your own device (BYOD) technology for data collection. 2021. Available from: https://c-path.org/wp-content/uploads/2021/04/BestPractices5.pdf (accessed February 8, 2022).
  28. Lane TS, Armin J, Gordon JS. Online recruitment methods for web-based and mobile health studies: a review of the literature. J Med Internet Res. 2015;17:e183.
  29. Ali SH, Foreman J, Capasso A, Jones AM, Tozan Y, DiClemente RJ. Social media as a recruitment platform for a nationwide online survey of COVID-19 knowledge, beliefs, and practices in the United States: methodology and feasibility analysis. BMC Med Res Methodol. 2020;20:116.
  30. US Food and Drug Administration. Digital health technologies for remote data acquisition in clinical investigations. 2022. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/digital-health-technologies-remote-data-acquisition-clinical-investigations (accessed February 14, 2022).
  31. Sage Bionetworks. Elements of informed consent. 2018. Available from: https://sagebionetworks.org/in-the-news/elements-informed-consent/ (accessed September 10, 2021).
  32. Geoghegan C, Nido V, Bemden AB, Hallinan Z, Jordan L, Kehoe LS, et al. Learning from patient and site perspectives to develop better digital health trials: recommendations from the Clinical Trials Transformation Initiative. Contemp Clin Trials Commun. 2020;19:100636.
  33. Iconplc.com. Wearables and digital endpoint generation: an end-to-end approach to managing wearable devices through clinical development. 2020. Available from: https://www.iconplc.com/insights/blog/2020/04/28/wearables-and-digital-end/ (accessed July 20, 2021).
  34. Manta C, Patrick-Lake B, Goldsack JC. Digital measures that matter to patients: a framework to guide the selection and development of digital measures of health. Digit Biomark. 2020;4:69–77.
  35. Ohri N, Kabarriti R, Bodner WR, Mehta KJ, Shankar V, Halmos B, et al. Continuous activity monitoring during concurrent chemoradiotherapy. Int J Radiat Oncol Biol Phys. 2017;97:1061–5.
  36. US Food and Drug Administration. Patient-focused drug development guidance: methods to identify what is important to patients and select, develop or modify fit-for-purpose clinical outcome assessments. 2018. Available from: https://www.fda.gov/drugs/news-events-human-drugs/patient-focused-drug-development-guidance-methods-identify-what-important-patients-and-select (accessed July 21, 2021).
  37. Byrom B, Watson C, Doll H, Coons SJ, Eremenco S, Ballinger R, et al. Selection of and evidentiary considerations for wearable devices and their measurements for use in regulatory decision making: recommendations from the ePRO consortium. Value Health. 2018;21:631–9.
  38. Goldsack JC, Coravos A, Bakker JP, Bent B, Dowling AV, Fitzer-Attas C, et al. Verification, analytical validation, and clinical validation (V3): the foundation of determining fit-for-purpose for Biometric Monitoring Technologies (BioMeTs). NPJ Digit Med. 2020;3:55.
  39. Hays RD, Woolley JM. The concept of clinically meaningful difference in health-related quality-of-life research. How meaningful is it? Pharmacoeconomics. 2000;18:419–23.
  40. Landers M, Dorsey R, Saria S. Digital endpoints: definition, benefits, and current barriers in accelerating development and adoption. Digit Biomark. 2021 Sep–Dec;5(3):216–23.
  41. Godfrey A, Vandendriessche B, Bakker JP, Fitzer-Attas C, Gujar N, Hobbs M, et al. Fit-for-purpose biometric monitoring technologies: leveraging the laboratory biomarker experience. Clin Transl Sci. 2021;14(1):62–74.
  42. Jones L, Tan L, Carey-Jones S, Riddell N, Davies R, Brownsdon A, et al. Can wearable technology be used to approximate cardiopulmonary exercise testing metrics? Perioper Med. 2021;10:9.
  43. Mahadevan N, Demanuele C, Zhang H, Volfson D, Ho B, Erb MK, et al. Development of digital biomarkers for resting tremor and bradykinesia using a wrist-worn wearable device. NPJ Digit Med. 2020;3:5.
  44. Mahadevan N, Christakis Y, Di J, Bruno J, Zhang Y, Dorsey ER, et al. Development of digital measures for nighttime scratch and sleep using wrist-worn wearable devices. NPJ Digit Med. 2021;4(1):42.
  45. Psaltos D, Chappie K, Karahanoglu FI, Chasse R, Demanuele C, Kelekar A, et al. Multimodal wearable sensors to measure gait and voice. Digit Biomark. 2019;3:133–44.
  46. Coravos A, Doerr M, Goldsack J, Manta C, Shervey M, Woods B, et al. Modernizing and designing evaluation frameworks for connected sensor technologies in medicine. NPJ Digit Med. 2020;3:37.
  47. Manta C, Mahadevan N, Bakker J, Ozen Irmak S, Izmailova E, Park S, et al. EVIDENCE publication checklist for studies evaluating connected sensor technologies: explanation and elaboration. Digit Biomark. 2021 May–Aug;5(2):127–47.
  48. Hicks JL, Althoff T, Sosic R, Kuhar P, Bostjancic B, King AC, et al. Best practices for analyzing large-scale health data from wearables and smartphone apps. NPJ Digit Med. 2019;2:45.
  49. Castaneda D, Esparza A, Ghamari M, Soltanpur C, Nazeran H. A review on wearable photoplethysmography sensors and their potential future applications in health care. Int J Biosens Bioelectron. 2018;4:195–202.
  50. European Medicines Agency. Questions and answers: qualification of digital technology-based methodologies to support approval of medicinal products. 2020. Available from: https://www.ema.europa.eu/en/documents/other/questions-answers-qualification-digital-technology-based-methodologies-support-approval-medicinal_en.pdf.
  51. Volpp KG, Troxel AB, Mehta SJ, Norton L, Zhu J, Lim R, et al. Effect of electronic reminders, financial incentives, and social support on outcomes after myocardial infarction: the HeartStrong randomized clinical trial. JAMA Intern Med. 2017;177:1093–101.
  52. Blakey JD, Bender BG, Dima AL, Weinman J, Safioti G, Costello RW. Digital technologies and adherence in respiratory diseases: the road ahead. Eur Respir J. 2018;52:1801147.
  53. Byrom B, Rowe DA. Measuring free-living physical activity in COPD patients: deriving methodology standards for clinical trials through a review of research studies. Contemp Clin Trials. 2016;47:172–84.
  54. Izmailova ES, Wagner JA, Perakslis ED. Wearable devices in clinical trials: hype and hypothesis. Clin Pharmacol Ther. 2018;104:42–52.
  55. The Playbook. Digital clinical measures. 2021. Available from: https://playbook.dimesociety.org/ (accessed September 10, 2021).
  56. Clinical Trials Transformation Initiative. CTTI Considerations for Advancing the Use of Digital Technologies for Data Capture & Improved Clinical Trials. 2021. Available from: https://ctti-clinicaltrials.org/wp-content/uploads/2021/06/CTTI_Digital_Health_Technologies_Recs.pdf (accessed 2022).
  57. Riggare S, Stamford J, Hägglund M. A long way to go: patient perspectives on digital health for Parkinson’s disease. J Parkinsons Dis. 2021;11(s1):S5–10.
  58. Coran P, Goldsack JC, Grandinetti CA, Bakker JP, Bolognese M, Dorsey E, et al. Advancing the use of mobile technologies in clinical trials: recommendations from the Clinical Trials Transformation Initiative. Digit Biomark. 2019;3:145–54.
  59. Mishra T, Wang M, Metwally AA, Bogu GK, Brooks AW, Bahmani A, et al. Pre-symptomatic detection of COVID-19 from smartwatch data. Nat Biomed Eng. 2020;4:1208–20.
  60. Roque NA, Boot WR. A new tool for assessing mobile device proficiency in older adults: the mobile device proficiency questionnaire. J Appl Gerontol. 2018;37:131–56.
  61. Aral S, Nicolaides C. Exercise contagion in a global social network. Nat Commun. 2017;8:14753.
  62. Meyerowitz-Katz G, Ravi S, Arnolda L, Feng X, Maberly G, Astell-Burt T. Rates of attrition and dropout in app-based interventions for chronic disease: systematic review and meta-analysis. J Med Internet Res. 2020;22:e20283.
  63. Bloem BR, Marks WJ Jr, Silva de Lima AL, Kuijf ML, van Laar T, Jacobs BPF, et al. The personalized Parkinson project: examining disease progression through broad biomarkers in early Parkinson’s disease. BMC Neurol. 2019;19:160.
  64. Staunton H, Willgoss T, Nelsen L, Burbridge C, Sully K, Rofail D, et al. An overview of using qualitative techniques to explore and define estimates of clinically important change on clinical outcome assessments. J Patient Rep Outcomes. 2019;3:16.
  65. McDonald CM, Henricson EK, Abresch R, Florence JM, Eagle M, Gappmaier E, et al. The 6-minute walk test and other endpoints in Duchenne muscular dystrophy: longitudinal natural history observations over 48 weeks from a multicenter study. Muscle Nerve. 2013;48:343–56.
  66. Pew Research Center. Demographics of mobile device ownership and adoption in the United States. 2021. Available from: https://www.pewresearch.org/internet/fact-sheet/mobile/ (accessed July 21, 2021).
  67. Pew Research Center. About one-in-five Americans use a smart watch or fitness tracker. 2020. Available from: https://www.pewresearch.org/fact-tank/2020/01/09/about-one-in-five-americans-use-a-smart-watch-or-fitness-tracker/ (accessed August 24, 2021).
  68. Anderson M, McCleary KK. On the path to a science of patient input. Sci Transl Med. 2016;8:336ps11.
  69. Perry B, Dombeck C, Smalley JB, Levitan B, Leventhal D, Patrick-Lake B, et al. Development and application of a patient group engagement prioritization tool for use in medical product development. Ther Innov Regul Sci. 2021;55(2):324–35.
  70. Brohan E, Bonner N, Turnbull A, Khan S, Dewit O, Thomas G, et al. Development of a patient-led end of study questionnaire to evaluate the experience of clinical trial participation. Value Health. 2014;17:A649.
  71. National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Health Sciences Policy; Roundtable on Genomics and Precision Health; Forum on Drug Discovery, Development, and Translation. The role of digital health technologies in drug development: proceedings of a workshop. Washington, DC: National Academies Press; 2020.
  72. Pew Research Center. Home broadband adoption, computer ownership vary by race, ethnicity in the U.S. 2021. Available from: https://www.pewresearch.org/fact-tank/2021/07/16/home-broadband-adoption-computer-ownership-vary-by-race-ethnicity-in-the-u-s/ (accessed August 24, 2021).
  73. Sjoding MW, Dickson RP, Iwashyna TJ, Gay SE, Valley TS. Racial bias in pulse oximetry measurement. N Engl J Med. 2020;383(25):2477–8.
  74. Tedesco S, Sica M, Ancillao A, Timmons S, Barton J, O’Flynn B. Accuracy of consumer-level and research-grade activity trackers in ambulatory settings in older adults. PLoS One. 2019;14:e0216891.
  75. Alemayehu D, Cappelleri JC, Emir B, Zou KH, editors. Statistical topics in health economics and outcomes research. New York: CRC Press; 2017.



Article / Publication Details


Received: November 05, 2021
Accepted: April 07, 2022
Published online: July 04, 2022
Issue release date: May - August

Number of Print Pages: 14
Number of Figures: 2
Number of Tables: 4


eISSN: 2504-110X (Online)

For additional information: https://beta.karger.com/DIB


Open Access License / Drug Dosage / Disclaimer

This article is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC). Usage and distribution for commercial purposes requires written permission. Drug Dosage: The authors and the publisher have exerted every effort to ensure that drug selection and dosage set forth in this text are in accord with current recommendations and practice at the time of publication. However, in view of ongoing research, changes in government regulations, and the constant flow of information relating to drug therapy and drug reactions, the reader is urged to check the package insert for each drug for any changes in indications and dosage and for added warnings and precautions. This is particularly important when the recommended agent is a new and/or infrequently employed drug. Disclaimer: The statements, opinions and data contained in this publication are solely those of the individual authors and contributors and not of the publishers and the editor(s). The appearance of advertisements or/and product references in the publication is not a warranty, endorsement, or approval of the products or services advertised or of their effectiveness, quality or safety. The publisher and the editor(s) disclaim responsibility for any injury to persons or property resulting from any ideas, methods, instructions or products referred to in the content or advertisements.

References

  1. FDA-NIH Biomarker Working Group. BEST (Biomarkers, EndpointS, and other Tools) resource. Silver Spring, MD: Food and Drug Administration (US); 2016. Glossary. 2016 Jan 28 (Updated 2021 Nov 29). Available from: https://www.ncbi.nlm.nih.gov/books/NBK338448/. Co-published by Bethesda, MD: National Institutes of Health (US).
  2. Huhn S, Axt M, Gunga H-C, Maggioni MA, Munga S, Obor D, et al. The impact of wearable technologies in health research: scoping review. JMIR Mhealth Uhealth. 2022;10(1):e34384.
    External Resources
  3. Roberts DM, Schade MM, Mathew GM, Gartenberg D, Buxton OM. Detecting sleep using heart rate and motion data from multisensor consumer-grade wearables, relative to wrist actigraphy and polysomnography. Sleep. 2020;43:zsaa045.
    External Resources
  4. Buekers J, Theunis J, De Boever P, Vaes AW, Koopman M, Janssen EVM, et al. Wearable finger pulse oximetry for continuous oxygen saturation measurements during daily home routines of patients with chronic obstructive pulmonary disease (COPD) over one week: observational study. JMIR Mhealth Uhealth. 2019;7:e12866.
    External Resources
  5. Dunn J, Kidzinski L, Runge R, Witt D, Hicks JL, Schüssler-Fiorenza Rose SM, et al. Wearable sensors enable personalized predictions of clinical laboratory measurements. Nat Med. 2021;27(6):1105–12.
    External Resources
  6. Mittlesteadt J, Bambach S, Dawes A, Wentzel E, Debs A, Sezgin E, et al. Evaluation of an activity tracker to detect seizures using machine learning. J Child Neurol. 2020;35:873–8.
    External Resources
  7. Baker M, van Beek J, Gossens C. Digital health: smartphone-based monitoring of multiple sclerosis using Floodlight. 2020. Available from: https://www.nature.com/articles/d42473-019-00412-0 (accessed July 20, 2021).
  8. Manta C, Jain SS, Coravos A, Mendelsohn D, Izmailova ES. An evaluation of biometric monitoring technologies for vital signs in the era of COVID-19. Clin Transl Sci. 2020;13:1034–44.
    External Resources
  9. Byrom B, Doll H, Muehlhausen W, Flood E, Cassedy C, McDowell B, et al. Measurement equivalence of patient-reported outcome measure response scale types collected using bring your own device compared to paper and a provisioned device: results of a randomized equivalence trial. Value Health. 2018;21:581–9.
    External Resources
  10. Shahraz S, Pham TP, Gibson M, De La Cruz M, Baara M, Karnik S, et al. Does scrolling affect measurement equivalence of electronic patient-reported outcome measures (ePROM)? Results of a quantitative equivalence study. J Patient Rep Outcomes. 2021;5(1):23.
    External Resources
  11. McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. J Clin Epidemiol. 2014;67:267–77.
    External Resources
  12. Pugliese L, Woodriff M, Crowley O, Lam V, Sohn J, Bradley S. Feasibility of the “bring your own device” model in clinical research: results from a randomized controlled pilot study of a mobile patient engagement tool. Cureus. 2016;8:e535.
    External Resources
  13. Coons SJ, Eremenco S, Lundy JJ, O’Donohoe P, O’Gorman H, Malizia W. Erratum to: capturing patient-reported outcome (PRO) data electronically: the past, present, and promise of ePRO measurement in clinical trials. Patient. 2015;8:571.
    External Resources
  14. Yeomans A. The future of ePRO platforms. Applied Clinical Trials. 2014. Available from: https://www.appliedclinicaltrialsonline.com/view/future-epro-platforms (accessed February 16, 2022).
  15. Cho PJ, Yi JJ, Ho E, Shandhi MMH, Dinh YH, Patil A, et al. Demographic imbalances resulting from bring-your-own-device study design. JMIR Mhealth Uhealth. 2022;10(4):e29510.
  16. ClinicalTrials.Gov. A study on impact of canagliflozin on health status, quality of life, and functional status in heart failure. Available from: https://clinicaltrials.gov/ct2/show/NCT04252287 (accessed September 14, 2021).
  17. Zhu G, Li J, Meng Z, Yu Y, Li Y, Tang X, et al. Learning from large-scale wearable device data for predicting the epidemic trend of COVID-19. Discrete Dyn Nat Soc. 2020;2020:1.
    External Resources
  18. Radin JM, Wineinger NE, Topol EJ, Steinhubl SR. Harnessing wearable device data to improve state-level real-time surveillance of influenza-like illness in the USA: a population-based study. Lancet Digit Health. 2020;2(2):e85–93.
    External Resources
  19. Quer G, Radin JM, Gadaleta M, Baca-Motes K, Ariniello L, Ramos E, et al. Wearable sensor data and self-reported symptoms for COVID-19 detection. Nat Med. 2021;27(1):73–7.
    External Resources
  20. Noga SJ, Rifkin RM, Manda S, Birhiray RE, Lyons RM, Whidden P, et al. Real-world (RW) treatment patterns and patient-related factors including quality of life (QoL), medication adherence, and actigraphy in community patients (pts) with newly diagnosed multiple myeloma (NDMM) transitioning from bortezomib (btz) to ixazomib: the US MM-6 community-based study. Blood. 2019;134(Suppl 1):3168.
    External Resources
  21. ClincialTrials.Gov. Study to evaluate the efficacy of EZC pak in adults with upper respiratory infection (URI). 2022. Available from: https://clinicaltrials.gov/ct2/show/NCT04943575 (accessed February 15. 2022).
  22. Crouthamel M, Quattrocchi E, Watts S, Wang S, Berry P, Garcia-Gancedo L, et al. Using a ResearchKit smartphone app to collect rheumatoid arthritis symptoms from real-world participants: feasibility study. JMIR Mhealth Uhealth. 2018;6:e177.
    External Resources
  23. Velentgas P, Dreyer NA, Nourjah P, Smith SR, Torchia MM, editors. Developing a protocol for observational comparative effectiveness research: a user’s guide. Rockville, MD: Agency for Healthcare Research and Quality (US); 2013.
  24. Project-Management.Info. Your guide to project management, agile and scrum. Available from: https://project-management.info/ (accessed July 21, 2021).
  25. Sieck CJ, Sheon A, Ancker JS, Castek J, Callahan B, Siefer A. Digital inclusion as a social determinant of health. NPJ Digit Med. 2021;4(1):52.
    External Resources
  26. Gwaltney C, Coons SJ, O’Donohoe P, O’Gorman H, Denomey M, Howry C, et al. “Bring your own device” (BYOD): the future of field-based patient-reported outcome data collection in clinical trials? Ther Innov Regul Sci. 2015;49:783–91.
    External Resources
  27. Critical Path Institute’s ePRO Consortium. Best practices for participant registration in clinical trials using bring your own device (BYOD) technology for data collection. 2021. Available from: https://c-path.org/wp-content/uploads/2021/04/BestPractices5.pdf (accessed February 8, 2022).
  28. Lane TS, Armin J, Gordon JS. Online recruitment methods for web-based and mobile health studies: a review of the literature. J Med Internet Res. 2015;17:e183.
  29. Ali SH, Foreman J, Capasso A, Jones AM, Tozan Y, DiClemente RJ. Social media as a recruitment platform for a nationwide online survey of COVID-19 knowledge, beliefs, and practices in the United States: methodology and feasibility analysis. BMC Med Res Methodol. 2020;20:116.
  30. US Food and Drug Administration. Digital health technologies for remote data acquisition in clinical investigations. 2022. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/digital-health-technologies-remote-data-acquisition-clinical-investigations (accessed February 14, 2022).
  31. Sage Bionetworks. Elements of informed consent. 2018. Available from: https://sagebionetworks.org/in-the-news/elements-informed-consent/ (accessed September 10, 2021).
  32. Geoghegan C, Nido V, Bemden AB, Hallinan Z, Jordan L, Kehoe LS, et al. Learning from patient and site perspectives to develop better digital health trials: recommendations from the Clinical Trials Transformation Initiative. Contemp Clin Trials Commun. 2020;19:100636.
  33. Iconplc.com. Wearables and digital endpoint generation: an end-to-end approach to managing wearable devices through clinical development. 2020. Available from: https://www.iconplc.com/insights/blog/2020/04/28/wearables-and-digital-end/ (accessed July 20, 2021).
  34. Manta C, Patrick-Lake B, Goldsack JC. Digital measures that matter to patients: a framework to guide the selection and development of digital measures of health. Digit Biomark. 2020;4:69–77.
  35. Ohri N, Kabarriti R, Bodner WR, Mehta KJ, Shankar V, Halmos B, et al. Continuous activity monitoring during concurrent chemoradiotherapy. Int J Radiat Oncol Biol Phys. 2017;97:1061–5.
  36. US Food and Drug Administration. Patient-focused drug development guidance: methods to identify what is important to patients and select, develop or modify fit-for-purpose clinical outcome assessments. 2018. Available from: https://www.fda.gov/drugs/news-events-human-drugs/patient-focused-drug-development-guidance-methods-identify-what-important-patients-and-select (accessed July 21, 2021).
  37. Byrom B, Watson C, Doll H, Coons SJ, Eremenco S, Ballinger R, et al. Selection of and evidentiary considerations for wearable devices and their measurements for use in regulatory decision making: recommendations from the ePRO consortium. Value Health. 2018;21:631–9.
  38. Goldsack JC, Coravos A, Bakker JP, Bent B, Dowling AV, Fitzer-Attas C, et al. Verification, analytical validation, and clinical validation (V3): the foundation of determining fit-for-purpose for Biometric Monitoring Technologies (BioMeTs). NPJ Digit Med. 2020;3:55.
  39. Hays RD, Woolley JM. The concept of clinically meaningful difference in health-related quality-of-life research. How meaningful is it? Pharmacoeconomics. 2000;18:419–23.
  40. Landers M, Dorsey R, Saria S. Digital endpoints: definition, benefits, and current barriers in accelerating development and adoption. Digit Biomark. 2021 Sep–Dec;5(3):216–23.
  41. Godfrey A, Vandendriessche B, Bakker JP, Fitzer-Attas C, Gujar N, Hobbs M, et al. Fit-for-purpose biometric monitoring technologies: leveraging the laboratory biomarker experience. Clin Transl Sci. 2021;14(1):62–74.
  42. Jones L, Tan L, Carey-Jones S, Riddell N, Davies R, Brownsdon A, et al. Can wearable technology be used to approximate cardiopulmonary exercise testing metrics? Perioper Med. 2021;10:9.
  43. Mahadevan N, Demanuele C, Zhang H, Volfson D, Ho B, Erb MK, et al. Development of digital biomarkers for resting tremor and bradykinesia using a wrist-worn wearable device. NPJ Digit Med. 2020;3:5.
  44. Mahadevan N, Christakis Y, Di J, Bruno J, Zhang Y, Dorsey ER, et al. Development of digital measures for nighttime scratch and sleep using wrist-worn wearable devices. NPJ Digit Med. 2021;4(1):42.
  45. Psaltos D, Chappie K, Karahanoglu FI, Chasse R, Demanuele C, Kelekar A, et al. Multimodal wearable sensors to measure gait and voice. Digit Biomark. 2019;3:133–44.
  46. Coravos A, Doerr M, Goldsack J, Manta C, Shervey M, Woods B, et al. Modernizing and designing evaluation frameworks for connected sensor technologies in medicine. NPJ Digit Med. 2020;3:37.
  47. Manta C, Mahadevan N, Bakker J, Ozen Irmak S, Izmailova E, Park S, et al. EVIDENCE publication checklist for studies evaluating connected sensor technologies: explanation and elaboration. Digit Biomark. 2021 May–Aug;5(2):127–47.
  48. Hicks JL, Althoff T, Sosic R, Kuhar P, Bostjancic B, King AC, et al. Best practices for analyzing large-scale health data from wearables and smartphone apps. NPJ Digit Med. 2019;2:45.
  49. Castaneda D, Esparza A, Ghamari M, Soltanpur C, Nazeran H. A review on wearable photoplethysmography sensors and their potential future applications in health care. Int J Biosens Bioelectron. 2018;4:195–202.
  50. European Medicines Agency. Questions and answers: qualification of digital technology-based methodologies to support approval of medicinal products. 2020. Available from: https://www.ema.europa.eu/en/documents/other/questions-answers-qualification-digital-technology-based-methodologies-support-approval-medicinal_en.pdf.
  51. Volpp KG, Troxel AB, Mehta SJ, Norton L, Zhu J, Lim R, et al. Effect of electronic reminders, financial incentives, and social support on outcomes after myocardial infarction: the HeartStrong randomized clinical trial. JAMA Intern Med. 2017;177:1093–101.
  52. Blakey JD, Bender BG, Dima AL, Weinman J, Safioti G, Costello RW. Digital technologies and adherence in respiratory diseases: the road ahead. Eur Respir J. 2018;52:1801147.
  53. Byrom B, Rowe DA. Measuring free-living physical activity in COPD patients: deriving methodology standards for clinical trials through a review of research studies. Contemp Clin Trials. 2016;47:172–84.
  54. Izmailova ES, Wagner JA, Perakslis ED. Wearable devices in clinical trials: hype and hypothesis. Clin Pharmacol Ther. 2018;104:42–52.
  55. The Playbook. Digital clinical measures. 2021. Available from: https://playbook.dimesociety.org/ (accessed September 10, 2021).
  56. Clinical Trials Transformation Initiative. CTTI Considerations for Advancing the Use of Digital Technologies for Data Capture & Improved Clinical Trials. 2021. Available from: https://ctti-clinicaltrials.org/wp-content/uploads/2021/06/CTTI_Digital_Health_Technologies_Recs.pdf (accessed 2022).
  57. Riggare S, Stamford J, Hägglund M. A long way to go: patient perspectives on digital health for Parkinson’s disease. J Parkinsons Dis. 2021;11(s1):S5–10.
  58. Coran P, Goldsack JC, Grandinetti CA, Bakker JP, Bolognese M, Dorsey E, et al. Advancing the use of mobile technologies in clinical trials: recommendations from the Clinical Trials Transformation Initiative. Digit Biomark. 2019;3:145–54.
  59. Mishra T, Wang M, Metwally AA, Bogu GK, Brooks AW, Bahmani A, et al. Pre-symptomatic detection of COVID-19 from smartwatch data. Nat Biomed Eng. 2020;4:1208–20.
  60. Roque NA, Boot WR. A new tool for assessing mobile device proficiency in older adults: the mobile device proficiency questionnaire. J Appl Gerontol. 2018;37:131–56.
  61. Aral S, Nicolaides C. Exercise contagion in a global social network. Nat Commun. 2017;8:14753.
  62. Meyerowitz-Katz G, Ravi S, Arnolda L, Feng X, Maberly G, Astell-Burt T. Rates of attrition and dropout in app-based interventions for chronic disease: systematic review and meta-analysis. J Med Internet Res. 2020;22:e20283.
  63. Bloem BR, Marks WJ Jr, Silva de Lima AL, Kuijf ML, van Laar T, Jacobs BPF, et al. The personalized Parkinson project: examining disease progression through broad biomarkers in early Parkinson’s disease. BMC Neurol. 2019;19:160.
  64. Staunton H, Willgoss T, Nelsen L, Burbridge C, Sully K, Rofail D, et al. An overview of using qualitative techniques to explore and define estimates of clinically important change on clinical outcome assessments. J Patient Rep Outcomes. 2019;3:16.
  65. McDonald CM, Henricson EK, Abresch R, Florence JM, Eagle M, Gappmaier E, et al. The 6-minute walk test and other endpoints in Duchenne muscular dystrophy: longitudinal natural history observations over 48 weeks from a multicenter study. Muscle Nerve. 2013;48:343–56.
  66. Pew Research Center. Demographics of mobile device ownership and adoption in the United States. 2021. Available from: https://www.pewresearch.org/internet/fact-sheet/mobile/ (accessed July 21, 2021).
  67. Pew Research Center. About one-in-five Americans use a smart watch or fitness tracker. 2020. Available from: https://www.pewresearch.org/fact-tank/2020/01/09/about-one-in-five-americans-use-a-smart-watch-or-fitness-tracker/ (accessed August 24, 2021).
  68. Anderson M, McCleary KK. On the path to a science of patient input. Sci Transl Med. 2016;8:336ps11.
  69. Perry B, Dombeck C, Smalley JB, Levitan B, Leventhal D, Patrick-Lake B, et al. Development and application of a patient group engagement prioritization tool for use in medical product development. Ther Innov Regul Sci. 2021;55(2):324–35.
  70. Brohan E, Bonner N, Turnbull A, Khan S, Dewit O, Thomas G, et al. Development of a patient-led end of study questionnaire to evaluate the experience of clinical trial participation. Value Health. 2014;17:A649.
  71. National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Health Sciences Policy; Roundtable on Genomics and Precision Health; Forum on Drug Discovery, Development, and Translation. The role of digital health technologies in drug development: proceedings of a workshop. Washington, DC: National Academies Press; 2020.
  72. Pew Research Center. Home broadband adoption, computer ownership vary by race, ethnicity in the U.S. 2021. Available from: https://www.pewresearch.org/fact-tank/2021/07/16/home-broadband-adoption-computer-ownership-vary-by-race-ethnicity-in-the-u-s/ (accessed August 24, 2021).
  73. Sjoding MW, Dickson RP, Iwashyna TJ, Gay SE, Valley TS. Racial bias in pulse oximetry measurement. N Engl J Med. 2020;383(25):2477–8.
  74. Tedesco S, Sica M, Ancillao A, Timmons S, Barton J, O’Flynn B. Accuracy of consumer-level and research-grade activity trackers in ambulatory settings in older adults. PLoS One. 2019;14:e0216891.
  75. Alemayehu D, Cappelleri JC, Emir B, Zou KH, editors. Statistical topics in health economics and outcomes research. New York: CRC Press; 2017.