Computer System Validation—A Risk-Based System Lifecycle Approach

Clinical Researcher—June 2018 (Volume 32, Issue 6)

ICH IN FOCUS

Denise Botto, BS; Michael Rutherford, MS

[DOI: 10.14524/CR-18-4033]

 

 

In the April 2018 installment of this column (“ICH E6(R2) and Data Integrity: Four Key Principles”), computer system validation (CSV) was briefly discussed as one of the key principles of data integrity. It was emphasized that a one-size-fits-all approach to CSV is not aligned with regulatory expectations; validation should be risk-based, taking into consideration “the intended use of the system and the potential of the system to affect human subject protection and reliability of trial results.”{1} The column also pointed out that the system should not be considered in isolation from the relevant business process—rather, the entire business process and data flow should be considered in the risk assessment.{2}

ICH E6(R2) Section 1.65 describes Validation of Computerized Systems as “a process of establishing and documenting that the specified requirements of a computerized system can be consistently fulfilled from design until decommissioning of the system or transition to a new system.”{1} This timeframe from design until decommissioning is known as the “system lifecycle.” All systems have a lifecycle—a beginning, a middle, and an end.

This column discusses the key concepts and industry best practices associated with a risk-based system lifecycle approach to CSV.

Planning Activities

The system lifecycle begins with the planning phase. A formal, planned approach to CSV ensures that quality is built into the system. Two types of plans are generally documented in a CSV effort: a Validation Plan and a Test Plan.

A Validation Plan is written at the start of the validation project to define the overall approach to the validation effort. The Validation Plan documents how the validation will be executed, along with the timelines, key deliverables, and key members of the validation team.

A Test Plan defines the overall approach to the testing effort and the general types of testing that will be performed.

Both types should be risk-based plans, taking into consideration the regulatory impact of the system as well as system novelty and complexity. They should also leverage supplier documentation, where available, to avoid unnecessary duplication.{3}

When applying a risk-based approach, the planning documents can vary significantly. For applications that follow a common process, such as Excel spreadsheets, the validation and test plans can be defined in a procedure outlining the validation requirements and documentation necessary to validate a specific spreadsheet. For more complex systems, such as one for electronic data capture (EDC), the planning activities need to be defined specifically for each system.
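
As a simplified illustration of how a risk-based approach can scale the planning effort, the Python sketch below maps a system's regulatory impact and complexity to the validation deliverables a plan might call for. The function name, risk categories, and deliverables are hypothetical examples chosen for illustration; they are not drawn from ICH E6(R2) or any specific methodology.

# Hypothetical sketch: scale the deliverables named in a validation plan
# to a simple risk rating. Categories and deliverables are illustrative only.

def plan_deliverables(regulatory_impact: str, complexity: str) -> list:
    """Return the validation deliverables to document in the plan."""
    deliverables = [
        "User Requirements",
        "User Acceptance Testing",
        "Validation Summary Report",
    ]
    if "high" in (regulatory_impact, complexity):
        deliverables += [
            "Functional/Design Specifications",
            "Traceability Matrix",
            "Integration and System Testing",
        ]
    return deliverables

# A simple spreadsheet might need only the baseline deliverables...
print(plan_deliverables(regulatory_impact="low", complexity="low"))
# ...while a complex EDC system would warrant the full set.
print(plan_deliverables(regulatory_impact="high", complexity="high"))

In practice this reasoning is captured in written plans and procedures rather than in code; the point is simply that the extent of the deliverables is a function of risk.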

User Requirements and Functional and Design Specifications

User Requirements define what the system must do to meet the intended business function. Requirements should be gathered from the users (i.e., representatives of the business process) and they should be written in such a way that they can be objectively tested.

Functional and Design Specifications define what the system will do to meet the requirements and how the system will function at a technical level. These specifications should also be written to enable objective testing to be performed.

Similar to the planning activities, the User Requirements and the Functional and Design Specifications should be risk-based and leverage any supplier documentation. The system is then built based on the Requirements and Specifications.

Again, using the spreadsheet and EDC examples, the detail contained in the requirements specification will vary greatly with the intended use and complexity of the system. The requirements for a spreadsheet depend on its complexity, the functionality and calculations it incorporates, and the criticality of the results it generates.

The EDC system’s complexity and intended use would also drive the level of detail included in that system’s specification documents. The emphasis is on defining and designing the system to meet its intended use.

Testing

During testing, one or more system components are tested under controlled conditions and results are observed and recorded. Test scripts are developed, formally documented, and executed to demonstrate that the system has been installed and is operating and performing satisfactorily. These test scripts are based on the User Requirements and the Functional and Design Specifications for the system.{3}

To ensure that there is a clear link between the test scripts and the Requirements and Specifications, a Traceability Matrix should be generated. The rigor of the traceability activities should also be based on a risk assessment.
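
A traceability matrix can be as simple as a table linking each requirement identifier to the test scripts that verify it. The short Python sketch below builds such a matrix and flags any requirement with no linked test script; the identifiers and requirement text are hypothetical examples, not taken from any specific system.

# Hypothetical sketch of a traceability check; identifiers are illustrative.
requirements = {
    "UR-01": "Calculate subject body mass index",
    "UR-02": "Flag out-of-range laboratory values",
    "UR-03": "Export data in the agreed transfer format",
}

# Each test script lists the requirement(s) it verifies.
test_scripts = {
    "TS-01": ["UR-01"],
    "TS-02": ["UR-02"],
}

# Traceability matrix: requirement -> test scripts covering it.
matrix = {
    req_id: [ts for ts, covered in test_scripts.items() if req_id in covered]
    for req_id in requirements
}

# Any requirement without a linked test script is a traceability gap.
gaps = [req_id for req_id, tests in matrix.items() if not tests]
print(matrix)
print("Requirements without test coverage:", gaps or "none")

Dedicated tools or spreadsheets are typically used to maintain the matrix; what matters is the explicit, documented link between each requirement and the testing that demonstrates it.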

Testing is required to confirm the effective performance of a software application or product. It is important to note, however, that testing is not the same as validation: validation is more than testing, although testing is a fundamental part of the validation process.

There are four levels of testing for software: unit (or code) testing; integration/module testing; system testing; and customer acceptance testing, generally known as user acceptance testing or performance qualification.

Unit testing is performed by the software developer as part of the software development process. It is the level of software testing at which individual units/components of software are tested. It is important that testing be performed during the development of a system so that errors can be identified and eliminated early in the process.
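
As a minimal illustration of unit testing, the Python sketch below tests a single hypothetical calculation (body mass index) of the kind that might sit behind a validated spreadsheet or application. The function and its expected values are assumptions made for the example only.

# Hypothetical unit-test sketch; the calculation and values are illustrative.
import unittest

def bmi(weight_kg: float, height_m: float) -> float:
    """Return body mass index rounded to one decimal place."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return round(weight_kg / height_m ** 2, 1)

class TestBmi(unittest.TestCase):
    def test_typical_value(self):
        # 70 kg at 1.75 m should give a BMI of 22.9.
        self.assertEqual(bmi(70, 1.75), 22.9)

    def test_rejects_invalid_height(self):
        # Invalid input is rejected rather than silently miscalculated.
        with self.assertRaises(ValueError):
            bmi(70, 0)

if __name__ == "__main__":
    unittest.main()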

Integration/module testing is one of the most critical aspects of the software development process, as it involves individual elements of software code (and hardware, where applicable) being combined and tested until the entire system has been integrated. Errors found at the integration testing phase are less expensive to correct than errors found at a later stage of testing.

System testing is a level of software testing where complete and integrated software is tested. The purpose of this test is to evaluate the system’s compliance with the specified requirements.

User acceptance testing is then performed by the users as the last phase of the software testing process. During such testing, actual users test the software to make sure it can handle required tasks in real-world scenarios, according to the requirements and the business process and associated procedures.

There are instances where not all levels of testing are performed. The type and complexity of the testing executed is based on the use and criticality of the system. For a simple spreadsheet, the testing may be limited to user acceptance testing.

As the complexity of the spreadsheet increases, so too does the level of required testing. If the spreadsheet were actually a workbook (a collection of spreadsheets within a single file) leveraging data from multiple spreadsheets, the need for unit/integration testing would increase. For an EDC system, all levels of testing would more likely be needed, given the typical complexity and configuration of such systems.
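
Returning to the workbook example, the Python sketch below illustrates why combining sheets raises the need for integration testing: two hypothetical sheets are modeled as functions (one calculating per-visit change from baseline, one summarizing those changes), and the test exercises them in combination. The calculations and values are illustrative assumptions only.

# Illustrative integration-style test: a "summary" step consumes the output
# of a "visits" step, so the two are verified together.
import unittest

def visit_changes(baseline: float, visits: list) -> list:
    """Per-visit change from baseline (one value per follow-up visit)."""
    return [round(v - baseline, 2) for v in visits]

def summary_mean_change(changes: list) -> float:
    """Mean change across visits, as the summary sheet would report it."""
    return round(sum(changes) / len(changes), 2)

class TestWorkbookIntegration(unittest.TestCase):
    def test_summary_uses_visit_sheet_output(self):
        changes = visit_changes(baseline=100.0, visits=[98.0, 95.0, 96.0])
        self.assertEqual(changes, [-2.0, -5.0, -4.0])
        self.assertEqual(summary_mean_change(changes), -3.67)

if __name__ == "__main__":
    unittest.main()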

Change Management

Once the testing is complete and the system has been released into production, the validation effort is not over; the system needs to be maintained in a validated state throughout its lifecycle through decommissioning or transition to a new system.

Unfortunately, a “bug” is sometimes discovered in the system, requiring a software patch to be installed. Or perhaps a new user or functional requirement must be implemented, which could require a system upgrade.

All of these updates should be performed under Change Management, in order to maintain the validated state of the system and ensure all changes are tracked and documented. This includes changes to infrastructure, application software, layered software, databases, networks, and workstations. Any changes to the validated computerized system should be reviewed and authorized by user representatives, and the changes should be tested based on the risk assessment.

Managing the change is essential, no matter what type of system it is. What will vary are the change management activities and the rigor applied, based on the impact of the change. For a simple spreadsheet, implementing the change might be as simple as updating the spreadsheet and re-executing the validation process. For more complex systems such as an EDC system, implementing the change will include updates to documentation and execution of appropriate testing to ensure the change works as intended and does not have an unintended impact on other functionality within the system.
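
As a simple illustration of keeping changes tracked and documented, the Python sketch below defines a hypothetical change record capturing the description, risk rating, authorization, and the risk-based testing required. The field names and values are assumptions for illustration, not a prescribed format or the record structure of any particular tool.

# Hypothetical change record; field names and values are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    change_id: str
    description: str
    risk: str                  # e.g., "low", "medium", "high"
    approved_by: str           # user representative authorizing the change
    approved_on: date
    tests_required: list = field(default_factory=list)

patch = ChangeRecord(
    change_id="CHG-042",
    description="Apply vendor patch correcting a date-format defect in the export module",
    risk="medium",
    approved_by="Business process owner",
    approved_on=date(2018, 6, 1),
    tests_required=["Regression test of the export module", "Smoke test of standard reports"],
)
print(patch)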

Conclusion

Computer system validation is an essential process for ensuring, as well as documenting, that a computerized system does what it is designed to do—consistently and reproducibly. It is not measured by the number or size of documents or deliverables that are produced, but is most effective and efficient when properly planned and executed using a risk-based approach that focuses on human subject protection and reliability of trial results. It should also take into account system novelty and complexity and leverage supplier documentation, where available, to avoid unnecessary duplication.

References

  1. International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use. ICH Harmonized Guideline. Integrated Addendum to ICH E6(R1): Guideline for Good Clinical Practice E6(R2), Current Step 4 version (November 9, 2016).
  2. Medicines and Healthcare Products Regulatory Agency (MHRA). ‘GxP’ Data Integrity Guidance and Definitions, Revision 1 (March 2018).
  3. International Society for Pharmaceutical Engineering (ISPE). ISPE GAMP® 5 Guide: A Risk-Based Approach to Compliant GxP Computerized Systems (April 2008).
  4. Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme (PIC/S). Guidance PI 011-3: Good Practices for Computerised Systems in Regulated “GXP” Environments (September 25, 2007).

Denise Botto, BS, (denise.botto@syneoshealth.com) is Associate Director of Computer Systems Quality for Syneos Health.

 

Michael Rutherford, MS, (michael.rutherford@syneoshealth.com) is Executive Director of Computer Systems Quality and Data Integrity for Syneos Health.