Assessing the Impact of Online GCP Training on CRC Perceptions of Adverse Events

Clinical Researcher—February 2019 (Volume 33, Issue 2)

PEER REVIEWED

Linda S. Behar-Horenstein, PhD; Lissette Tolentino; Huan Kuang; Wajeeh Bajwa, PhD; H. Robert Kolb, RN, MS, CCRC

In response to the growing complexities of clinical research as it converges with new technologies and regulatory intricacies, the International Council for Harmonization (ICH) published the Integrated Addendum to its Guidelines for Good Clinical Practice (GCP) in 2016.{1} This amended guideline was intended to address the impacts of the online era on study conduct, while preserving the elements of human subject protection and data integrity.

The ICH GCP E6(R2) addendum offers a unified international standard providing reassurance that “the rights, safety, and well-being of trial subjects are protected.”{1} Key components of these GCP guidelines and protections emphasize the investigator’s trial-related responsibilities and, within this context, include the reporting of adverse events (AEs).

Many institutions, such as those with study sites participating in the U.S. National Institutes of Health (NIH) Clinical and Translational Science Awards (CTSA) program, rely on standardized online GCP training systems provided by the Collaborative Institutional Training Initiative (CITI) program and the Association of Clinical Research Professionals (ACRP). CITI and ACRP offer two of the dominant online platforms designed to equip learners with GCP core concepts.

GCP training is also part of a growing movement toward establishing a set of core competencies from which to build standardized didactic curriculum.{2} These core competencies have been vetted by investigators with the CTSA hubs in the Enhancing Clinical Research Professionals’ Training and Qualification (ECRPTQ) project, and subsequently have been accepted by the NIH’s National Center for Advancing Translational Science (NCATS).{3}

ECRPTQ has two central goals: the first has been to implement a standardized training process for professionals involved in CTSA clinical research. The second has been to advocate for a collaborative approach leading to the development of consistent training and qualification strategies to generate additional GCP best practices across academic institutions.{3} As a result of this work, the NIH mandated that all of its funded investigators and clinical trials research staff be trained in GCP.

Learning About Learning

Little is known about the effectiveness of the dominant CITI and ACRP GCP online learning platforms, or how they impact core GCP competency. Whether clinical research coordinator (CRC) interactions with their principal investigators (PIs) on reporting AEs are better facilitated through online teaching or through structured work experience and mentoring has not been shown. For this study, “online learning” refers to module-driven training sessions absent real-time interaction.

Eight research questions were analyzed in this study to determine the efficacy of online training in influencing CRC perceptions of, and their actions toward, handling AEs. The research questions are as follows:

  1. What is the relationship between the frequency of CRC reporting AEs to PIs when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as: Observing AE to PI by Demographics.
  2. What is the relationship between bringing AEs to the PI’s attention (Yes/No) when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as: Bringing AE to PI Attention by Demographics.
  3. What is the relationship between discussing the AE with the PI (1) verbally, (2) via e-mail, or (3) through both methods when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as: Format of Discussing AE with PI by Demographics.
  4. What is the relationship between PI’s response/action to the CRC reports of AEs when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as: PI Response to AE by Demographics.
  5. What is the relationship between whether the PI took CRC’s observation seriously and made the appropriate changes when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as: PI Actionable Response to AE by Demographics.
  6. What is the relationship between whether the PI simply acknowledged the CRC’s concern, but did not act on it when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as: PI Non-Actionable Response to AE by Demographics.
  7. What is the relationship between the PI rejecting CRC observation of the AE when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as: PI Rejection to AE by Demographics.
  8. What is the relationship between how CRCs would handle a situation where an AE was observed, but the PI did not take any action when compared by (a) coordinator primary responsibility, (b) training background, and (c) length of time involved in clinical research? Hereafter referred to as CRC Future Response to PI’s Rejection to AE by Demographics.

Experiential Learning Theory

This study is grounded in the Experiential Learning Theory,{4} which posits that knowledge and skill acquisition depends on a cycle of experiential learning (see Figure 1). Its underlying premise is that learning evolves via four types of engagement:

1) Experiential (i.e., concrete experience), whereby CRCs gather information from the world (e.g., observations of inappropriate methods of informed consent);

2) Reflective (i.e., reflective observation), whereby CRCs take time to think, process, organize, and relate inputs to other known factors that surround that experience (e.g., determining whether risk to the participant was great or immediate);

3) Abstract (i.e., abstract conceptualization), whereby CRCs create new meanings from developing unique ways of looking at existing information (e.g., how recording data incorrectly can jeopardize study integrity); and

4) Action (i.e., active experimentation), whereby CRCs actively test a hypothesis (e.g., responding to ethical misconduct, “I can test my emergent hypothesis that reporting it to an anonymous source leads to appropriate resolution”).

The ELT promotes reflective conversation (or executive consciousness) that helps learners shape responses to the goals of the project (e.g., creating a conversational space for members to reflect on their experiences).{5,6} ELT stimulates the sharing of functional leadership,{7} whereby personal needs are replaced by the shared roles necessary for meeting project goals. Kolb showed that training groups or teams are cultivated by sharing experiences and reflecting on the meaning of those experiences together.{8}

Figure 1: Experiential Learning Theory (ELT) Cycle{8}

Methods

Participants completed two GCP online training programs: one developed by CITI and one developed by ACRP.

The CITI GCP training includes basic courses tailored to the different types of clinical research.{9} Refresher courses are also offered for retraining and advanced learning. The CITI program offers several GCP courses that satisfy the 2016 NIH policy.{10}

ICH GCP E6 Investigator Site Training courses from CITI also meet the minimum criteria for ICH GCP Investigator Site Personnel Training identified by TransCelerate BioPharma, Inc. to enable mutual recognition of GCP training among trial sponsors. These courses, written and peer-reviewed by experts, have been updated to include ICH E6(R2) standards.{1,11}

Suggested audiences for the CITI GCP courses include institutional review board (IRB) members, PIs, CRCs, research nurses, clinical research organization (CRO) staff, and other key study personnel based at study sites and sponsors.{9}

The ACRP GCP course is for all clinical research professionals.{12} This course is preparatory for those engaging in clinical research and a good review for seasoned professionals. It addresses the globally accepted standard for conducting ethical and scientifically sound research, and for ensuring the use of universal language related to the conduct of clinical research.

The interactive ACRP online course incorporates real-world scenarios that the learner is likely to encounter during a clinical trial. This training also meets the minimum criteria for ICH GCP Investigator Site Personnel Training as identified by TransCelerate BioPharma, Inc.{11}

Learning objectives of the ACRP course are{12}:

  1. List the key drivers that led to the formation of the ICH and its focus on GCP.
  2. Explain the key considerations to be made with regard to GCP during a clinical trial.
  3. Describe the roles and responsibilities of a sponsor, an investigator, and the IRB or institutional ethics committee.
  4. Explain the AE reporting requirements for both the sponsor and the investigator.
  5. List the core requirements for securing informed consent from study participants.
  6. Describe the importance of protocol compliance and clear documentation in the clinical trial process.
  7. Define the purpose of various documents and templates that members use in clinical trials.

Each of these training modules is based on the ICH Harmonized Tripartite Guideline – Clinical Safety Data Management: Definitions and Standards for Expedited Reporting and the ICH Harmonized Guideline: Integrated Addendum to ICH E6(R1): Guideline for Good Clinical Practice E6(R2), and each addresses E8, which includes safety reporting. More specifically, the training modules address Classifying Adverse Events, Investigator Reporting Requirements, Monitoring/Reporting Requirements for Sponsors, and Differences in Reporting AEs to Sponsors/IRBs.

“The objectives of reporting AEs are to identify new risk information as early as possible and to develop a profile of the drug. Investigators must immediately report [serious AEs], and sponsors must have processes in place to evaluate the events. Once new risk information is identified, sponsors are required to report the information to regulators and stakeholders, and changes to the trial should be implemented, when appropriate, to reduce the risks associated with the trial.”{9}

Following receipt of IRB approval, study participants were recruited by sending e-mails to the research community’s listservs and by placing posters on campus. Participants were eligible if they were involved in human subject research at a large, CTSA-designated public university in the Southeastern United States; there was no differentiation by gender or race.

Volunteers (n=132) at any level of experience indicated a willingness to take part in the training programs. Of those, 95 (72%) participated by completing the training program online as well as the pre- and post-survey in Research Electronic Data Capture (REDCap), a secure, web-based application designed to support traditional case report form data capture. Six months after finishing the training program and the pre- and post-surveys, participants were contacted via e-mail for a follow-up. The 40 participants who finished the six-month follow-up survey constitute the dataset; the response rate was 42.1%.

The data did not meet the assumption of normality for parametric testing. Thus, Wilcoxon Signed Rank and McNemar tests were conducted to determine if there were differences between the pre-test and six-month follow-up in the frequency of CRC reporting of AEs, the actions taken by the PIs, and how CRCs would handle reporting AEs differently in the future. The Wilcoxon Signed Rank test was used to determine if there were any significant rank differences between the pre-test and six-month follow-up test for the questions “Observing AE to PI by Demographics,” “Format of Discussing AE with PI by Demographics,” “PI Non-Actionable Response to AE by Demographics,” and “PI Rejection to AE by Demographics.”

Given their dichotomous response pattern, McNemar tests were used for the questions “Bringing AE to PI Attention by Demographics” and “PI Actionable Response to AE by Demographics.” A nonparametric bivariate Spearman correlation was used to analyze change scores from the pre-test to the six-month follow-up by demographic variables. Mann-Whitney tests were conducted to compare differences in the six-month follow-up scores by the CRC’s primary responsibility, training background, and length of time involved in clinical research. Effect sizes were also calculated for the Mann-Whitney tests using a formula from Fritz, Morris, and Richler (see full-issue PDF for this formula).{13}
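The Mann-Whitney effect sizes reported in the Results can be reproduced with the commonly used Z-to-r conversion described by Fritz, Morris, and Richler, r = Z/√N, where N is the total number of observations across both groups. A minimal sketch in Python, assuming N = 40 (the full follow-up sample):

```python
import math

def mann_whitney_r(z: float, n: int) -> float:
    """Convert a Mann-Whitney Z statistic to the effect size r,
    using r = Z / sqrt(N), where N is the total number of
    observations across both groups."""
    return z / math.sqrt(n)

# The study's two significant follow-up results (assuming N = 40):
r_attention = mann_whitney_r(-2.25, 40)  # "Bringing AE to PI Attention"
r_format = mann_whitney_r(-2.51, 40)     # "Format of Discussing AE with PI"
print(f"r = {r_attention:.2f}, r = {r_format:.2f}")
```

Rounded to two decimals, these values agree with the reported medium effect sizes of r = -.36 and r = -.40.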

For purposes of data analyses, the “Coordinator Responsibility” variable was collapsed into two groups, where Group 1 consisted of those who self-identified as “Coordinator/Investigator/Other.” Group 2 consisted of those who self-identified as “Regulatory Coordinator/Research Compliance.” The “Other” label refers to those who did not fall into any of the other coordinator responsibility options.

Sample questions that participants were asked included, “How would you handle a situation where you observe a serious deviation, but the PI does not take any action?” Text responses were analyzed and categorized into two groups: Non-PI Supervisor (including personnel in nursing or clinical team leaders) and Institutional (including the IRB and the university’s Clinical Translational Science Institute [CTSI]). Content analysis and frequency counts were reported for each category.

Results

[Editor’s Note: Tables 1 through 3 at the bottom of this article highlight some of these results. These tables are better viewed in the full-issue PDF.]

Of the participants in this study, 72.5% self-identified with the “Investigator/Research Coordinator/Other” group, whereas 25% self-identified with the “Regulatory Coordinator/Research Compliance” group; the remaining participant (2.5%) was denoted as missing. Most respondents (72.5%) had 0–9 years of research experience, while 27.5% reported having 10 or more years of research experience. Of the respondents, 40% held a bachelor’s degree or below and 45% held a master’s degree or above, while 15% did not respond to this question.

Before and After Training Changes

There were no statistically significant differences seen in CRC reporting AEs, actions taken by the PIs, and how CRCs would handle reporting of AEs differently in the future from the pre-test versus six months after the training. Additionally, there were no statistically significant relationships between the demographic variables and how the deviations were discussed.

Training Performance by Demographics

There was a statistically significant difference (U=90.00, Z=-2.25, p=.024) on the six-month follow-up test scores related to “Bringing AE to PI Attention by Demographics.” The “Investigator/Research Coordinator/Other” group had a higher mean rank (21.29) than the “Regulatory Coordinator/Research Compliance” group (14.50). According to Cohen’s guidelines, the difference indicates a medium effect size (r=-.36).{13} The results indicate that the “Investigator/Research Coordinator/Other” group brought AEs to the PI’s attention more frequently than the “Regulatory Coordinator/Research Compliance” group.

Second, there was a statistically significant difference (U=71.00, Z=-2.51, p=.012) on the six-month follow-up test scores for “Format of Discussing AE with PI by Demographics.” The “Regulatory Coordinator/Research Compliance” group had a higher mean rank (27.40) than the “Investigator/Research Coordinator/Other” group (17.45). A medium effect size difference between these groups was also observed (r=-.40). The “Regulatory Coordinator/Research Compliance” group reported discussing AEs with the PI verbally, by e-mail, or both more frequently than the “Investigator/Research Coordinator/Other” group.

There were no significant differences in the six-month follow-up test scores related to the remaining research questions: “Observing AE to PI by Demographics,” “PI Response to AE by Demographics,” “PI Actionable Response to AE by Demographics,” “PI Non-Actionable Response to AE by Demographics,” “PI Rejection to AE by Demographics,” and “CRC Future Response to PI’s Rejection to AE by Demographics.” Moreover, no significant differences were found in the six-month follow-up test when compared by training background or length of time involved in clinical research.

Regarding the open-ended questions, 40% to 52.5% of the sample responded. In the pre-test, 17 participants indicated that they would report to the PI and 21 mentioned that they would report to others. Representative pre-test comments among those who would report to their PI were:

  • “I would try to approach the conversation from another angle to encourage the PI to understand the severity and how it could impact them and their studies.”
  • “I would meet face to face with PI to make sure I understand their reasoning.”

Representative pre-test comments among those who would report to others were:

  • “I would reach out to the compliance office at the [university].”
  • “I would … assess the risk to the participant. If I determined that the risk to the participant was great and/or immediate I would contact the IRB even though it would probably cost me my [...]”

In the six-month follow-up test, the number of participants who indicated that they would report to the PI or to others was 16 and 21, respectively. Representative comments among those who would report to their PI were:

  • “I would repeatedly bring it to his/her attention.”
  • “I would personally explain to him/her again the seriousness and would also document in e-mail asking him/her for a response.”

Representative comments among those who would report to others were:

  • “If action was not taken, I would report to the division.”
  • “I would let him/her know that I will be documenting and filing the response in regulatory binder so that sponsor can assess and discuss the situation with him/her if necessary. If still no action is taken, then I would notify the IRB.”

Discussion

Overall, the findings showed no significant differences in the frequency of CRC reporting of AEs, the actions taken by the PIs, or how CRCs would handle the reporting of AEs differently in the future. However, statistically significant differences on the six-month follow-up test scores by coordinator primary responsibility were observed:

  1. The “Investigator/Research Coordinator/Other” group brought AEs to the PI’s attention more frequently than the “Regulatory Coordinator/Research Compliance” group.
  2. The “Regulatory Coordinator/Research Compliance” group reported an increase in discussing AEs with the PI more often than the “Investigator/Research Coordinator/Other” group.

The differences in reporting by coordinator primary responsibility may reflect the distinct roles that each CRC category plays in the clinical research process. Members of the group bringing AEs to the PI’s attention more frequently have more direct participant contact, and are typically active in a clinic setting where events demand immediate attention. Meanwhile, those involved in the administrative processes of regulatory and compliance activity are likely more removed from the direct experience of participant management. Given their regulatory role and the fact that they likely do not work for the PI, they stand at a distance, which might have left them feeling freer to discuss AEs with the PI.

The results also show that the online training did not affect participants’ attitudes toward approaching PIs when faced with an AE. That is, the number of those who do not communicate with the PI is relatively the same for both the pre-test and the six-month follow-up. Also, the number of coordinators who do not report to the PI is greater than the number who do speak with the PI about reporting an AE.

These findings suggest that training did not improve high-level, crucial communication. After observing an AE whereby the PI does not take any action, the number of participants who do not report to the PI remained fairly unchanged. Also, the number of coordinators who discuss reporting AEs with others, such as a non-PI supervisor or institutional personnel, was greater than those who report to the PI. However, overall the number of participants who indicated that they would discuss it with the PI versus others in the pre-test and the six-month follow-up remained relatively unchanged.

As defined by the ECRPTQ Communication Working Group, high-level communication skills entail being capable of crafting clear and effective communications through a variety of mechanisms (e.g., face-to-face, e-mail). This skill includes the ability to define constructive criticism and differentiate between positive and negative feedback, as well as criticism.{14} Communicating at this level requires the capacity to assess conflict in situations and the ability to implement constructive methods of resolution. All study-related correspondence to team members, regulatory officials, and sponsors must be clear, concise, and effective.

Repercussions

The lack of high-level communication skills and inadequate training can lead to poor data integrity and compromised research participant safety (e.g., in AE reporting). As CTSAs across the consortium move to implement unique versions of online environments to support standardized, task-based training curricula, there is a risk that online training platforms will miss the essential task of improving CRCs’ vital responsibility in crucial communications, defeating their primary intentions.

The challenge of accurately assessing and evaluating online training and its capacity to promote competency remains. True competency is the ability to translate knowledge into effective action (i.e., reporting AEs), which is not easily measured by the traditional multiple-choice questions proffered in online courses. Given the centrality of assessment in certifying instructional effectiveness, it is important that the evaluation of real-time, task-based learning include metrics on both individual-level learning and systems improvement, as well as a prospective assessment of its impact on the research enterprise at large.

Despite the emergence of a vetted core competency framework for the conduct of clinical research, the move to build online training platforms and populate them with competency content may miss the mark as long as we do not have a common rubric to evaluate all platforms and content.{2} As shown in this study, there remains a clear, unmet need for developing meaningful, standardized metrics and evaluations in the form of rubrics to assess individual training effectiveness and the utility of the various training platforms.

Online platforms can provide a substantial introduction to the clinical research environment and regulations. However, in and of themselves, these platforms are likely insufficient in inducing the crucial communications vital to safe study conduct. The study findings highlight the gap between knowing GCP and the natural, interpersonal interchange of experience—above and beyond the mere collection of cognitive competencies—which actualizes GCP.

Given the prevalence of online GCP training options and the NIH mandate, it is important to understand how CRCs learn and manifest GCP behavior. Previous findings highlight the fact that obtaining competencies cannot be solely achieved through online training, and that current training does not address the psychosocial and communication factors transcending regulatory understandings and conduct of GCP.{15}

A previous study showed participant preference for hybrid learning, which utilized classroom teaching in conjunction with online environments.{16} The interpersonal communication processes that a classroom learning environment fosters are simply not available in an asynchronous scenario. When it comes to promoting crucial communications with PIs on the imperative issue of AE reporting, online teaching simply appears insufficient.

High-level communication skills are essential for CRCs to efficiently report and discuss AEs up the power gradient to a PI; however, our findings suggest that online GCP training did not improve high-level awareness in this arena. These skills require a certain competence that comes from an integration of professional confidence with interpersonal values. These values are transmitted via forces of social presence and its impact on self-concept.{16} Social reinforcement plays a crucial role in conveying the values essential to responsible AE reporting.

In contrast to classroom settings, a lack of social presence and real-time interaction in online courses often leads to feelings of isolation and disconnectedness, thus diminishing opportunities for social reinforcement.{17} Sung and Mayer define online social presence as an experience of an individual’s connectedness in a course.{18} Online social presence is the sense of others being present in the same experience that typically occurs via interpersonal interaction.{19} The integration of interpersonal values, which are essential to the role of a CRC, is transmitted via forces of social presence and its impact on self-concept.{20,21}

A sense of presence was largely absent in the online GCP training that we studied. These findings are consistent with a previous study in which CRCs articulated feelings of vulnerability to the PIs and expressed concerns around reporting AEs.{16} Addressing this vulnerability requires competency development that is defined by a professional attitude, knowledge, and skills necessary for a full realization of GCP advocacy of high-level communication skills. This level of skill attainment recognizes, respects, and solidifies the use of constructive methods of resolution.

Perhaps it is these values that elevate GCP competence—they are best represented by self-directed, autonomous individuals who are confident and capable of speaking up to authority (PIs) during crucial communications. Thus, the default GCP online approach is faced with a challenge in terms of how to ensure that working CRCs experience an active social presence, so that training produces self-directed and competent professionals.

Limitations

One limitation of this study is the relatively small sample size, which may decrease the representativeness of the sample and lead to less accurate results. Nonparametric methods cannot support a strong statistical conclusion with so few data. A lack of statistical power, owing to the sample size, might also increase the chance of a Type II error: although there were no statistically significant results related to the demographic variables, such effects may exist in the population.{22} Use of a larger sample size with sufficient power to draw statistical conclusions is recommended in future studies. While this study sought to assess the frequency of CRC reporting of AEs, actions taken by PIs, and how CRCs would handle reporting of AEs, it was not designed to promote the soft skills essential to high-level communication.

Implications of the Findings

The limitations of the course learning activities are apparent when viewed through the lens of Experiential Learning Theory, in that the online modules place knowledge and skill acquisition outside a cycle of experiential learning. Online content, as evident in this study, tends to inculcate a learning stance of rote memorization in which mandatory training becomes an exercise in completing multiple-choice test questions. In its current form, the GCP online content does not promote reflective conversation or the sharing of experiences. There was no accommodation in the learning activities that fostered abstraction, hypothesis testing, or active experimentation.

Online content should be revised to promote CRC capacity through the use of reflective observation and abstract conceptualization. Use of staggered reflective writings to describe participants’ perceptions over time is also recommended. The findings raise the question of whether there is some kind of “cultural censor” that the online format cannot breach. To better understand this potential phenomenon, surveying and interviewing PIs is recommended.

Future studies should use larger sample sizes across the nation’s CTSAs to amass a large database of CRC outcomes and insights about GCP course content and learning activities. By implementing a standardized approach to evaluation, comparative findings from such studies can be used to advance the common metrics movement and to develop a body of institutional and intra-institutional perspectives that showcase GCP coursework outcomes.{23}

Conclusions

This study explored whether online GCP training influenced changes in the frequency of CRC reporting AEs, actions taken by the PIs, and how CRCs would handle the reporting of AEs differently in the future. The findings suggest that the online training platforms examined did not improve crucial communication.

This finding in and of itself is remarkable, and suggests two possibilities. First, these behaviors may be resistant to change. Second, the lack of experiential learning may be tied to the absence of essential participant change. Increasing the level of experiential learning in online GCP courses, coupled with robust evaluation, is recommended.

Disclaimer

Research reported in this publication was supported by The University of Florida Clinical Translational Research Institute which is supported in part by the NIH National Center for Advancing Translational Sciences under award number UL1TR001427. The content is solely the responsibility of the authors, and does not necessarily represent the official views of the NIH.

References

  1. The International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH). 2016. ICH Harmonized Guideline: Integrated Addendum To ICH E6(R1): Guideline For Good Clinical Practice E6(R2). www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Efficacy/E6/E6_R2__Step_4_2016_1109.pdf
  2. Sonstein SA, Seltzer J, Li R, Jones CT, Silva H, Daemen E. 2014. Moving from compliance to competency: a harmonized core competency framework for the clinical research professional. Clin Res 28(3):17–23.
  3. Shanley TP, Calvin-Naylor NA, Divecha R, Wartak MM, Blackwell K, Davis JM, et al. 2017. Enhancing clinical research professionals’ training and qualifications (ECRPTQ): recommendations for good clinical practice (GCP) training for investigators and study coordinators. JCTS 1(1):8–15.
  4. Kolb AY, Kolb DA. 2005. Learning styles and learning spaces: enhancing experiential learning in higher education. Acad Manag Learn Edu 4(2):193–212.
  5. Dewey J. 2013. My pedagogic creed. In Curriculum Studies Reader E2. Routledge; 29–35.
  6. Baker AC, Jensen PJ, Kolb DA. 2002. Conversational learning: an experiential approach to knowledge creation. Greenwood Publishing Group.
  7. Mills TM. 1967. The sociology of small groups. Englewood Cliffs: Prentice Hall.
  8. Kolb DA. 1984. Experiential learning: experience as the source of learning and development. Englewood Cliffs: Prentice Hall.
  9. Good Clinical Practice (GCP) Course. 2018. CITI Program. https://about.citiprogram.org/en/series/good-clinical-practice-gcp/
  10. National Institutes of Health (NIH). 2016. Policy on Good Clinical Practice Training for NIH Awardees Involved in NIH-funded Clinical Trials (Notice Number: NOT-OD-16-148). https://grants.nih.gov/grants/guide/notice-files/NOT-OD-16-148.html
  11. Site Qualification and Training. 2018. TransCelerate BioPharma, Inc. www.transceleratebiopharmainc.com/assets/site-qualification-and-training/
  12. Association of Clinical Research Professionals. 2018. Introduction to Good Clinical Practice. https://acrpnet.org/courses/introduction-to-good-clinical-practice/
  13. Fritz CO, Morris PE, Richler JJ. 2012. Effect size estimates: current use, calculations, and interpretation. J Exp Psych: General 141(1):2–18. doi:10.1037/a0024338
  14. Michigan Institute for Clinical and Health Research. 2015. ECRPTQ Meeting III - Competency Domain Working Group 8: Communication. Ann Arbor, Mich.
  15. Behar-Horenstein LS, Bajwa W, Kolb HR, Prikhidko A. 2017. A mixed-method approach to assessing online dominant GCP training platforms. Clin Res 31(5):38–42. doi:10.14524/CR-17-0013
  16. Behar-Horenstein LS, Potter JE, Prikhidko A, Swords S, Sonstein S, Kolb HR. 2017. Training impact on novice and experienced research coordinators. Qualitative Report 22(12):3118.
  17. Rovai AP, Wighting MJ. 2005. Feelings of alienation and community among higher education students in a virtual classroom. Internet Higher Ed 8(2):97–110.
  18. Sung E, Mayer RE. 2012. Five facets of social presence in online distance education. Comps Human Beh 28(5):1738–47.
  19. Biocca F, Harms C, Burgoon JK. 2003. Toward a more robust theory and measure of social presence: review and suggested criteria. Presence: Teleops Virt Envs 12(5):456–80.
  20. Zhan Z, Mei H. 2013. Academic self-concept and social presence in face-to-face and online learning: perceptions and effects on students’ learning achievement and satisfaction across environments. Comps Ed 69:131–8.
  21. Bowers J, Kumar P. 2015. Students’ perceptions of teaching and social presence: a comparative analysis of face-to-face and online learning environments. Int J Web-Based Learn Teach Techs 10(1):27–44.
  22. Lomax RG, Hahs-Vaughn DL. 2012. An introduction to statistical concepts. New York, N.Y.: Routledge Academic.
  23. Rubio DM. 2013. Common metrics to assess the efficiency of clinical research. Eval Health Profs 36(4):432–46. doi:10.1177/0163278713499586

Linda S. Behar-Horenstein, PhD, (lsbhoren@ufl.edu) is a Distinguished Teaching Scholar and Professor with the Colleges of Dentistry, Education, and Pharmacy and Director of CTSI Educational Development and Evaluation at The University of Florida.

Lissette Tolentino (Ltolen@ufl.edu) is a PhD student in Educational Statistics and Research and a Research Assistant with the College of Education for the CTSI at The University of Florida.

Huan Kuang (huan2015@ufl.edu) is a PhD student in Educational Statistics and Research, a Graduate Instructor with the School of Human Development and Organizational Studies in Education, and a Research Assistant with the College of Education for the CTSI at The University of Florida.

Wajeeh Bajwa, PhD, (wajeeh@bajwa.net) is Director of the CTSI Regulatory Knowledge and Support Program at the University of Florida.

H. Robert Kolb, RN, MS, CCRC, (kolbhr@ufl.edu) is Assistant Director of Clinical Research with the Translational Workforce Directorate and a Research Participant Advocate/Consultant for the Regulatory Knowledge, Research Support, and Service Center at The University of Florida.

 

Table 1: Bringing AEs to PI Attention and Format of Discussing AE with PI by Group Affiliation

| Question | Group | N | Mean Rank | Mann-Whitney U | Z | Effect Size r | p-Value |
|---|---|---|---|---|---|---|---|
| Bringing AE to PI Attention | Regulatory Coordinator/Research Compliance | 10 | 14.50 | 90.00 | -2.25 | -0.36 | .024* |
| | Coordinator/Investigator/Other | 28 | 21.29 | | | | |
| Format of Discussing AE with PI | Regulatory Coordinator/Research Compliance | 10 | 27.40 | 71.00 | -2.51 | -0.40 | .012* |
| | Coordinator/Investigator/Other | 29 | 17.45 | | | | |

Notes: * Denotes p ≤ .05
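The effect sizes in Table 1 can be checked against the reported Z statistics. For a Mann-Whitney U test, the conventional effect size is r = Z/√N, where N is the combined size of the two groups (see reference 13). A minimal Python sketch, assuming the authors followed that convention:

```python
import math

def mann_whitney_r(z, n_total):
    """Effect size for a Mann-Whitney U test: r = Z / sqrt(N),
    where N is the combined size of both groups (Fritz et al. 2012)."""
    return z / math.sqrt(n_total)

# Z statistics and group sizes as reported in Table 1
r_attention = mann_whitney_r(-2.25, 10 + 28)  # ~ -0.365; Table 1 reports -0.36
r_format = mann_whitney_r(-2.51, 10 + 29)     # ~ -0.402; Table 1 reports -0.40
```

Both values agree with the table to two decimal places, which supports the r = Z/√N interpretation of the "Effect Size r" column.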

 

Table 2: Descriptive Statistics for the Sample (n=40)

| Demographic Information | Category | n (%) |
|---|---|---|
| Coordinator Responsibility | Investigator/Research Coordinator/Other | 29 (72.5%) |
| | Regulatory Coordinator/Research Compliance | 10 (25%) |
| | Missing | 1 (2.5%) |
| Years of Research | 0–9 | 29 (72.5%) |
| | 10+ | 11 (27.5%) |
| | Missing | 0 (0%) |
| Training Background | Bachelor’s or Below | 16 (40%) |
| | Master’s and Above | 18 (45%) |
| | Missing | 6 (15%) |

 

Table 3: Pretest and Six-Month Follow-Up Comparisons by Research Questions

| Statistical Test | Research Question | N (Pre) | Mean Rank (Pre) | N (Six-Month Follow-Up) | Mean Rank (Six-Month Follow-Up) | Z | p-Value |
|---|---|---|---|---|---|---|---|
| Wilcoxon | Observing AE to PI | 39 | 14.88 | 40 | 12.12 | .458 | .647 |
| | Format of Discussing AE with PI* | 40 | 12.06 | 40 | 11.97 | 1.32 | .188 |
| | PI Non-Actionable Response to AE | 38 | 6.55 | 40 | 12.00 | .702 | .483 |
| | PI Rejection to AE | 38 | 4.50 | 40 | 6.50 | .525 | .599 |
| McNemar | Bringing AE to PI Attention | 38 | | 39 | | | 1.00 |
| | PI Actionable Response to AE | 38 | | 40 | | | 1.00 |