Who is watching when clinical trial stakeholders—the sponsors, contract research organizations (CROs), and sites involved—are spread out doing simultaneous business on multiple trials across the globe? Who is watching to make sure that every individual clinical trial project is proceeding through its milestones with every possible issue and risk properly identified and responded to?
The assumed answer is usually everyone—as in every stakeholder—and therein lies the problem. Everyone thinks everyone else is watching; the truth is that, with everyone overloaded with information, it is not humanly possible to track every issue in real time.
Most issues are reported much later than is ideal, and with little or no adequate action taken.1 Consequently, for example, a single unattended query can become the basis for escalated data management discrepancies. Project delays and increases in projected study budgets are more the norm than the exception. More important than the added cost is the painful loss of timely access to treatment for waiting patients.
A Tale of Two Studies
Without disclosing either study’s profile, we illustrate the philosophy behind “Who is Watching?” by describing the issues underlying delays in two studies. We reviewed the available data and the results of interviews with the project team during the January–May period in 2012. Several unattended and unmitigated risks contributed to delays in completion and increased budgets for Study A (Oncology) and Study B (Diabetes) (see Figure 1*). The issues listed below are interrelated, and many of them are broken out for examination in the sections to come:
- Many unresolved queries2 (Study A alone had nearly 10,000 total queries)
- Insufficient monitoring caused by inadequate processes
- Lack of staff training
- Lack of documented communications among the stakeholders
- Lack of structured handover
- Insufficient vendor oversight
- Insufficient contract oversight and tracking
About 40% of the queries for Study A were repetitive in nature, and about 30% were more than nine months old, with no documented reason for the delay in addressing them. Multiple changes of project managers and local country operations teams were the immediate causes of the unresolved queries. The lack of query details in project management documents and in communications with the data management vendor further worsened the backlog of outstanding queries.
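The kind of backlog profiling described above can be sketched in a few lines of analysis code. This is a minimal illustration only: the query log, field layout, dates, and the nine-month cutoff are all hypothetical stand-ins, not the actual study data.

```python
from collections import Counter
from datetime import date

# Hypothetical query log of (query_text, date_opened) pairs. A real EDC
# export would have many more fields; these values are invented.
queries = [
    ("Missing units for lab value", date(2011, 3, 14)),
    ("Missing units for lab value", date(2011, 6, 2)),
    ("Visit date outside window", date(2011, 12, 20)),
    ("Missing units for lab value", date(2012, 1, 5)),
]

today = date(2012, 5, 1)  # end of the illustrative review period

# Share of queries whose text repeats (a crude proxy for "repetitive").
counts = Counter(text for text, _ in queries)
repetitive = sum(n for n in counts.values() if n > 1)
pct_repetitive = 100 * repetitive / len(queries)

# Share of queries older than roughly nine months (~270 days).
aged = sum(1 for _, opened in queries if (today - opened).days > 270)
pct_aged = 100 * aged / len(queries)

print(f"{pct_repetitive:.0f}% repetitive, {pct_aged:.0f}% older than 9 months")
```

Even this toy version makes the point of the case study: without someone (or something) routinely computing and reviewing such figures, a backlog of repetitive, aging queries goes unnoticed.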
For Studies A and B, contracts failed to adequately establish thresholds or specify how to manage the numbers and kinds of queries. Unresolved queries alone resulted in an increase in change orders that then increased the budget. Further, there were no interim face-to-face meetings or communication between the key clinical trial sites and the sponsor or monitoring staff. Reviewing the communication logs, particularly where minutes were kept, revealed that regular communications were nonexistent.
Both studies lacked sufficient monitoring visits and well-defined processes. Although it is acceptable for long-term studies to have a reduced number of visits, it is essential to have processes to continue to monitor the quality of the data that the studies are generating. Since these were missing, several quality issues were overlooked, including monitoring of critical data points.
Deficiencies in data monitoring were mainly due to limited onsite visits. Case report forms (CRFs) sent to data management did not undergo source data verification (SDV), resulting in repeated queries.
Other issues were related to delayed CRF completion (e.g., data entry was delayed by almost one year in some cases, which in turn delayed query resolution). Changes of principal investigators at some institutions occurred without proper handover and documentation. In addition, on the specific datapoint of efficacy, follow-up data on treatment outcomes were missing; names of concomitant medications were missing and not reconciled; and safety database reconciliation was never performed.
The resulting delayed interim analysis was costly because the needed data for planned conference publications were not available. This postponed product launches and key opinion leader engagement activities.
Lack of Training
Since both were long-running studies in multiple countries in Asia, the Middle East, and Eastern Europe, several regulatory and pharmacovigilance changes occurred over their course. Training related to new regulations, policies, and standard operating procedures (SOPs) was not implemented in real time at several clinical sites in different countries, which resulted in protocol deviations such as missed visit windows, noncompliance with study product intake, and inadequate safety reporting. Correcting these deficiencies required time-consuming and resource-intensive efforts. The lack of a proper training matrix and poor delivery of the existing training were at the root of this issue.3
Lack of Stakeholder Communication
Our analysis also revealed a lack of proper communication channels, not only to address the issues, but also to determine their severity. Had there been proper risk monitoring through quality tracking or oversight, the issues could have been identified and resolved in a timely manner.
A robust quality oversight system would have prevented the delays and increased costs experienced during both studies. By applying the spirit and concept behind the U.S. Food and Drug Administration’s (FDA’s) guidance on risk-based monitoring (RBM) and the related European Medicines Agency’s (EMA’s) reflection paper, the study owner—a major pharmaceutical company—could have alleviated the situation by combining the right technology with trained personnel.
In 2012, the FDA encouraged “sponsors to develop monitoring plans that manage important risks to human subjects and data quality and address the challenges of oversight in part by taking advantage of the innovations in modern clinical trials. The FDA asserts that [RBM] could improve sponsor oversight of clinical investigations.”4 Further, in 2013, the EMA came out with its own views on RBM for quality purposes in clinical trials, in a paper that states the purpose is “to encourage and facilitate the development of a more systematic, prioritized, risk-based approach to quality management of clinical trials, to support the principles of Good Clinical Practice and to complement existing quality practices, requirements and standards. Quality in this context is commonly defined as fitness for purpose. Clinical research is about generating information to support decision making while protecting the safety and rights of participating subjects. The quality of information generated should therefore be sufficient to support good decision making.”5
While the intents of the FDA’s guidance and EMA’s reflection paper are similar, two major issues are noted with the adoption of the tenets of both documents: interpretation and implementation.
Depending on the functional structure of a sponsor’s or CRO’s project team, there is a shift from 100% SDV with frequent face-to-face interaction with a site team to a more targeted and less frequent approach to monitoring visits. The decision-making process on how to adopt the guidance hinges upon first tweaking conventional, proven, and tested processes (re-prioritizing the budgets that come with them), and then assuming that the tweaked actions will yield the same outcome. Unfortunately, this is not the case.
Updating processes alone does not ensure compliance with both the guidance document and the reflection paper. For the paradigm to shift, the mindset must change. Along with this change must come acceptance of an initial increase in cost to leverage new or already available technology.
The cost of change will not be readily visible until a few years down the line. In performing the root cause analysis of the studies mentioned earlier, the factors that contributed to the increased cost and delayed completion became obvious only after the company decided to review its processes retrospectively and identify the gaps.
The old adage about “learning from one’s mistakes” not only resounded clearly, but also highlighted a fact of today’s clinical trial management environment: staying ahead and being first to market must take into account changing attitudes, refocusing on standardized training, increasing reliance on technology-savvy resources,6 and reconfiguring budgets to include, during the start-up phase, technology that can do half the work for people who will spend more of their time in-house or homebound rather than continuing to work as “road warriors.”
Quality Oversight Technology
Finding the technology these days that best suits what project teams need is like differentiating between wheat and rice noodles in a bowl of soup. Technology platforms from different vendors have major similarities in vision, and all promise to track and trend in as close to real time as possible.
Finding innovative technology that embodies a well-trained, cost-conscious, and independent (human) quality checker on a complicated assembly line, yet performs a hundred times better than any human, is no easy task. The sponsor is the stakeholder best positioned to utilize such technology, since, other than the patient, the sponsor is the most impacted by delayed clinical trials and the consequent increases in budget allocation.
The ability of the sponsor’s clinical research team to use their smartphones or tablets, at any time of day or night, to check the status of their studies in real time is to this day still in question. The goal of relying on technology to see the number of queries, patients enrolled, or risks identified and graded, as well as the cost of each activity and each site’s actual performance, remains on the project team’s wish list. To date, project teams still depend on reports generated by the data management or study management systems their companies have purchased, or must fall back on Excel spreadsheets.7
The questions remain: How can innovative treatments be made available faster and improve the trends toward disease management? How do we ensure that both data quality and the means of collection are reliable?
With a new political administration in the U.S. and forecasts of FDA deregulation8 leading to faster, cheaper drug development, a system that functions as an independent quality and risk tracker may be needed more than ever to ensure a “no blind spots” mentality. Without timely responses, any risks or quality issues detected will compromise clinical trial safety and efficiency. Hence, such a system should be configured to track and trend issues in as close to real time as possible. Company and site processes will need to be reviewed and enhanced to adapt to the changing landscape.
There is an opportunity for the FDA to once again focus on its mission of ensuring that patients have access to better drugs faster and at lower cost.9 For years, the agency appeared risk averse, because it is answerable to Congress and the public when risks of adverse side effects from approved drugs become apparent.
FDA’s risk aversion translates into complicated regulations that contribute to delays, whether because sites lack the resources to comply or because clinical trial teams fail to fully understand the regulations’ intent. Meanwhile, as the clear victims of side effects are accounted for, those patients who have yet to access, or even know of, better drugs to improve their lives remain mostly unidentified. Who can quantify the loss of life, or the diminished quality of life, caused by delays in improved treatments reaching the market?
The prevailing cost of a clinical trial program for the development of a single new drug can range from millions to billions of dollars.10 Only the big pharmaceutical companies will be able to manage this for long, because they have the resources; but even these firms are complaining: their investment must translate into bigger returns, and thus costlier drugs.11 Excluded from the big trials are the small, innovative pharmaceutical and biotechnology companies, which find it difficult to compete on cost; yet they could be the source of life-improving drugs and devices.
Quality and Risk Oversight Tracking (QROT)
QROT12 is the use of Big Data analytics and patient-centric solutions that are transparent and that promote integrity, with the ultimate goal of bringing medicines swiftly to patients at affordable prices (see Figure 2*). The industry’s tradition of relying mostly on batch data transfers does little to improve data-driven decision making. With QROT, however, quality and risk indicators (including financial data from enterprise solutions) are tracked in real time, making it possible to assess the cost and performance of projects at any time of day or night on smart devices.
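To make the idea concrete, the sketch below shows the simplest building block of such real-time tracking: checking each incoming data snapshot against preset indicator thresholds and raising alerts on breaches. The indicator names, threshold values, and snapshot figures are all hypothetical; an actual QROT platform would draw them from live study and enterprise systems.

```python
# Minimal sketch of threshold-based risk flagging, the kind of check a
# QROT dashboard might run on every data refresh. All indicator names
# and threshold values here are hypothetical.
THRESHOLDS = {
    "open_queries_per_site": 25,     # max acceptable open queries
    "query_age_days_p90": 30,        # 90th percentile of query age, days
    "budget_burn_pct_vs_plan": 110,  # actual spend as % of planned spend
}

def flag_risks(snapshot: dict) -> list[str]:
    """Return the indicators in `snapshot` that breach their thresholds."""
    return [name for name, limit in THRESHOLDS.items()
            if snapshot.get(name, 0) > limit]

# One hypothetical refresh for a single site.
snapshot = {
    "open_queries_per_site": 41,
    "query_age_days_p90": 12,
    "budget_burn_pct_vs_plan": 118,
}

for risk in flag_risks(snapshot):
    print(f"ALERT: {risk} breached threshold")
```

The value of the approach lies less in the check itself than in its timing: because the check runs on every refresh, a breach surfaces within hours rather than being discovered months later in a retrospective review.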
This QROT concept is further exemplified by the Quality Management Institute, which subscribes to having a “Zero Defect Attitude.”13 As applied to the process of QROT, this attitude means that “pride of workmanship” leads to people doing things right for the client, and delivering as close to what was promised as accurately as humanly possible. Adopting a “Zero Defect Attitude” empowers project teams to focus on using solutions that can diminish possible deficiencies.
Relating back to the case studies presented, the lack of quality oversight resonated from all the deficiencies identified. The underlying thought that followed was the need to avoid having the same problematic issues arise again and again.
The owner of the studies eventually embarked on trying something different: using the outcomes of the root cause analysis to defend the study budgets and to secure a new technology for quality and risk oversight that tracked and trended data as they came in.
Two new, smaller scale studies were launched. This time, the project team used the information from the system’s dashboard to closely monitor the progress of the studies. In the process of tracking more robustly and with more current data how the studies were doing, they were also able to enhance their SOPs, propose the continued use of the technology for quality risk oversight, and justify more training for their project team.
After 12 months, these two new studies had clear outcomes: they finished on time and within budget, and more importantly, their results were available for publication by the targeted date. No blind spots were noted once adequate tracking had been initiated.
1. Getz KA. 2011. Low hanging fruit in the fight against inefficiency. App Clin Trials 20(3). www.appliedclinicaltrialsonline.com/low-hanging-fruit-fight-against-inefficiency
2. Medidata Solutions. 2012. EDC autoquery rate – a marker for site quality. App Clin Trials 21(10). www.appliedclinicaltrialsonline.com/edc-autoquery-rate-marker-site-quality
3. Robinson M. 2012. The GXP training guidelines: raising standards through competence-based training. App Clin Trials 21(12). www.appliedclinicaltrialsonline.com/gxp-training-guidelines-raising-standards-through-competence-based-training
4. U.S. Food and Drug Administration. 2013. Guidance for Industry: Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring. https://www.fda.gov/downloads/Drugs/Guidances/UCM269919.pdf
5. European Medicines Agency. 2013. Reflection Paper on Risk Based Quality Management in Clinical Trials. www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/11/WC500155491.pdf
6. DiMasi J, Hansen R, Grabowski H. 2003. The price of innovation: new estimates of drug development costs. J Health Econ 22:151–85.
7. Henderson L. 2017. The clinical trial of tomorrow. App Clin Trials 26(2). www.appliedclinicaltrialsonline.com/clinical-trial-tomorrow
8. Wechsler J. 2016. Biomedical R&D faces new regulatory policies and priorities. App Clin Trials 25(12). www.appliedclinicaltrialsonline.com/biomedical-rd-faces-new-regulatory-policies-and-priorities
9. Brennan Z. 2015. Amended ICH GCP guideline addresses evolution of trials landscape. Regulatory Focus. www.raps.org/Regulatory-Focus/News/2015/09/29/23282/Amended-ICH-GCP-Guideline-Addresses-Evolution-of-Trials-Landscape/
10. Sertkaya A, Wong HH, Jessup A, Beleche T. 2016. Key cost drivers of pharmaceutical clinical trials in the United States. Clin Trials 13(2):117–26. doi: 10.1177/1740774515625964
11. Adams CP, Brantner VV. 2006. Estimating the cost of new drug development: is it really $802 million? Health Aff 25:420–8.
12. Mitchel JT, et al. 2014. Three-pronged approach to optimizing trial monitoring. App Clin Trials 23(6):37–44. http://alfresco.ubm-us.net/alfresco_images/pharma/2015/12/18/b2bc0683-9872-4e76-82bc-e5a5838028b2/2014-06ACT.pdf
13. Quality Management Institute. Executive Bulletin Archive. Q&A from QM Nation on Zero Defects Attitude©. http://qualitymanagementinstitute.com/ebarchive/EB111.aspx
Nadina Jose, MD, (email@example.com) is president and founder of Anidan Group Pte Ltd. in Singapore.
*To see tables and/or figures associated with this article, please access the October 2017 full-issue PDF.