Clinical Trial Software Testing – Healthcare Application Testing
What is Clinical Trial Data Management (CTDM)?
Clinical trial data management (CTDM) is a critical phase in clinical research, which leads to generation of high-quality, reliable, and statistically sound data from clinical trials. Clinical data management ensures collection, integration, and availability of data at appropriate quality and cost. It also supports the conduct, management, and analysis of studies across the spectrum of clinical research as defined by the National Institutes of Health (NIH). The ultimate goal of CTDM is to ensure that conclusions drawn from research are well supported by the data. Achieving this goal protects public health and confidence in marketed therapeutics. – From Wikipedia.
Because all drug makers, medical device manufacturers, and biotech companies must comply with FDA, HIPAA, and similar regulations, they need to record, manage, store, and provide easy access to documentation of every phase of the trial process. To reduce this time-intensive, paperwork-heavy approach, companies use Clinical Trial Data Management Systems to track study performance, generate data reports, and inform decision-making.
That is why the clinical trial software available on the market today is designed to help ensure both the quality of the study data and the safety of the patients participating in the study.
Why does Clinical Trial Software need Testing?
From a business perspective, we have to understand how the FDA, other regulators, and business partners evaluate the worth of the product; it all comes down to the integrity of the data in the system. From an ethical standpoint, clinical data affects patient treatment decisions, which in turn affect patient health. For these reasons, clinical data quality and integrity are critical.
Testing is an essential step in preparing a clinical research study to go live. It also helps ensure the quality and reliability of the data used to improve treatments. Testing confirms the behavior of the study software (data logging, functionality, workflow, and so on) to reveal any flaws in the software.
The testing should check for the following aspects:
- Test the software to ensure that the logical business flow and the data logging comply with applicable regulations and guidance, such as the protocols, Standard Operating Procedures (SOPs), and Good Clinical Practices (GCPs).
- The suite of tests should exercise every assessment data item to ensure that the data collected in the assessments are logged accurately and consistently.
- The suite of tests should review all branching logic in the business flow, as well as formatting, text, and spelling.
- The suite of tests should validate the translated versions of the device software developed for each software configuration, paying particular attention to language-specific logic.
- Each test should provide documented evidence that the systems function as specified.
Clinical Trial Software Testing for regulatory compliance
When conducting clinical trial software testing, there are many healthcare-industry regulations, such as FDA, HIPAA, FMLA, ADA, etc., that QA must be aware of. These regulations are always a part of any requirements, even if not explicitly stated, so we have to ensure that the test strategy and test plan accommodate them.
For example, consider a patient whose information is transmitted over the internet or stored in a medical device. Think of the security measures needed to cover this information: we need regulatory and compliance controls that protect this personal health information (PHI).
Protecting patient data and health information is therefore an utmost priority for health regulatory bodies, and testing should be done in compliance with their requirements. In this article, we will focus on FDA and HIPAA compliance.
What is the FDA? The FDA is the Food and Drug Administration, a US federal agency responsible for regulating drugs, biological products, and medical devices. In software testing, we concentrate on FDA verification and validation when we talk about the software aspect of a medical device, and it applies to four categories of healthcare software:
- Component of a medical device: software that is a part, component, or accessory of a medical device. Example: a blood glucose-monitoring device communicates with a mobile application; that mobile application becomes a component of the medical device.
- Medical device: the medical device itself. The blood glucose-monitoring device has software installed inside, and that software also falls under FDA verification and validation.
- Production of a medical device: software used in the production of the medical device, such as a programmable controller or logic system.
- Implementation of a quality system: the quality management system implemented at the device manufacturer's facility.
Under Code of Federal Regulations (CFR) Title 21 Part 11, the FDA accepts electronic records and signatures for compliance. The regulation sets the ground rules for the technology systems that manage information used by organizations subject to FDA regulation. It sets security requirements to ensure that electronic records and electronic signatures are trustworthy, reliable, and compatible with FDA procedures, as well as verifiable and traceable in comparison to paper records and traditional handwritten signatures.
21 CFR Part 11 requires that electronic signatures come with a detailed history of the document: an audit trail. Here are some of the crucial requirements for electronic signatures:
- The printed name of the signer, the date/time the signature was applied, and the meaning of the electronic signature;
- A human-readable form of the record that is not separable from the record;
- A unique user/patient/subject ID;
- Biometrics to identify the user, or at least two distinct identifiers (e.g., a user ID and password);
- Passwords that are checked periodically and expire; and
- Loss management procedures to deauthorize lost, stolen, or missing passwords.
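The signature requirements above can be sketched as automated checks on a record. This is a minimal illustration, not the Part 11 text itself: the `ESignature` field names and the 90-day password window are invented assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical signature record; field names are illustrative, not mandated by Part 11.
@dataclass
class ESignature:
    printed_name: str        # printed name of the signer
    signed_at: datetime      # date/time the signature was applied
    meaning: str             # meaning of the signature, e.g. "reviewed", "approved"
    user_id: str             # unique user/patient/subject ID
    password_set_at: datetime

PASSWORD_MAX_AGE = timedelta(days=90)  # assumed expiry policy, not a Part 11 value

def signature_violations(sig: ESignature, now: datetime) -> list:
    """Return a list of Part 11-style problems found in a signature record."""
    problems = []
    if not sig.printed_name:
        problems.append("missing printed name")
    if not sig.meaning:
        problems.append("missing signature meaning")
    if not sig.user_id:
        problems.append("missing unique user ID")
    if now - sig.password_set_at > PASSWORD_MAX_AGE:
        problems.append("password expired")
    return problems
```

A test suite would run such checks against every record type in which a signature can be applied, keeping the returned list as objective evidence.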
What is HIPAA? HIPAA stands for the Health Insurance Portability and Accountability Act (1996). It is a set of generally accepted security standards and requirements for protecting health information. HIPAA has two major components:
1. Health insurance coverage is protected for workers and their families when they change or lose their jobs.
2. National standards are established for the security and privacy of private health data while allowing the flow of health information needed to provide and promote high-quality health care and to protect the public's health and well-being.
Security and privacy of private health data are the main concerns for healthcare software testing and apply to all healthcare applications, so we will concentrate only on component No. 2 in this article. Testing a product for HIPAA compliance requires a thorough understanding of the regulation, so that the test cases fully cover all parts applicable to the product. HIPAA compliance testing focuses on the following four areas:
- First, user authentication and authorization. Authentication means that only a user who is authorized to log into the healthcare software can log in. Authorization means that access to information is granted based on user role and patient limitations; for example, once a user has logged in, only the information appropriate to that user should be visible. In authorization testing, we check that each user sees only the particular information they need.
- Second, audit log. The standard requires detailed activity logging on the server. We should ensure that activity logs record all activities within the application, with a particular focus on attempts to access PHI. Logs should also capture the user's activity when he or she accesses PHI, i.e., a detailed description of changes made, information added, etc. Thus, we should test activity logs for different types of users attempting to access PHI.
- Third, data transfers. Ensure data encryption at all transfer points, in accordance with the ANSI X12 5010 standard.
- Last, help information. The application should provide help information on the correct and incorrect uses of data.
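The first two areas, authorization by role and audit logging of PHI access, can be sketched with a toy PHI store. The roles, field names, and record shape below are invented for illustration, not taken from HIPAA.

```python
# Hypothetical PHI store; roles and visible fields are illustrative assumptions.
AUDIT_LOG = []

ROLE_FIELDS = {
    "physician":  {"name", "diagnosis", "ssn"},
    "billing":    {"name", "ssn"},
    "researcher": {"diagnosis"},   # de-identified view only
}

PATIENT = {"name": "Jane Doe", "diagnosis": "T2DM", "ssn": "123-45-6789"}

def view_patient(user_id: str, role: str) -> dict:
    """Return only the fields the role may see, and audit the PHI access."""
    allowed = ROLE_FIELDS.get(role, set())
    AUDIT_LOG.append({"user": user_id, "action": "read PHI",
                      "fields": sorted(allowed)})
    return {k: v for k, v in PATIENT.items() if k in allowed}
```

A compliance test would iterate over every role, assert that restricted fields never appear in the returned view, and assert that every access left an audit entry identifying the user.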
Clinical Trial Software Testing Life Cycle
In software testing of a healthcare application, we concentrate on FDA verification and validation. In this article, we look at the broad landscape of software V&V and help you design and document a test procedure for each new study, allowing you to focus on the items likely to cause problems and to decrease the time spent on non-critical-path items. This documented V&V procedure will provide you with:
- Assurance that critical functions have been documented and tested.
- Documentation of the database testing, which verifies that database access methods and processes function correctly and without data corruption.
- Documentation of the testing plan (the Test Plan outlines the scope, process, and testing strategy to be followed), the testing done, and the results of that testing.
Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase. Software verification looks for consistency, completeness, and correctness of the software and its supporting documentation, as it is being developed, and provides support for a subsequent conclusion that software is validated. Software testing is one of many verification activities intended to confirm that software development output meets its input requirements. Other verification activities include various static and dynamic analyses, code and document inspections, walkthroughs, and other techniques.
Software validation is a part of the design validation for a finished device but is not separately defined in the Quality System regulation. For purposes of this guidance, FDA considers software validation to be “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.”
MeU considers testing to be a verification activity that contributes to the validation of a product release. For this reason, testing may be referred to as both verification and validation.
During testing, one or more system components are tested under controlled conditions and results are observed and recorded. Test cases are developed, formally documented, and executed to demonstrate that the system has been installed and is operating and performing satisfactorily. These test cases are based on the User Requirements and the Functional and Design Specifications for the system.
To ensure that there is a clear link between the test scripts and the Requirements and Specifications, a Traceability Matrix should be generated. The rigor of the traceability activities should also be based on a risk assessment.
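The traceability idea can be sketched as a simple mapping from requirements to the test cases that cover them; the requirement and test IDs below are invented for the example.

```python
# Illustrative test-case → requirement mapping; IDs are made up.
TEST_COVERS = {
    "TC-001": ["UR-01"],           # login test covers the auth requirement
    "TC-002": ["UR-02", "UR-03"],  # data-entry test covers two requirements
}

def traceability_matrix(requirements, test_covers):
    """Map each requirement ID to the test cases that exercise it."""
    matrix = {req: [] for req in requirements}
    for test_id, reqs in test_covers.items():
        for req in reqs:
            matrix.setdefault(req, []).append(test_id)
    return matrix

def uncovered(matrix):
    """Requirements with no test case: a gap the Test Summary must explain."""
    return sorted(req for req, tests in matrix.items() if not tests)
```

Generating the matrix mechanically, from test-case metadata, makes it easy to fail the release process whenever `uncovered` is non-empty for a high-risk requirement.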
It is critical to develop a test plan (including the testing strategy and timeline) during implementation planning to ensure you have sufficient time and resources built into the project plan. Although you may not be sure of the detailed schedule during initial planning, you can allot dedicated time up front for each testing phase and the rounds within each phase. Testing phases can include functional testing, regression testing, integrated testing, user acceptance testing, performance testing, security testing, and so on. By blocking off time for each of the testing phases, you can more easily slide them into the overall timeline as the project moves forward. Allocating specific time for test case development is also essential.
Testing may be manual or automated. The content and results of manual tests are kept in the Product Tracking System (as test cases and results). The content of automated tests is kept in the Source Code Control System (as test scripts), and the results are kept in the test case management tool.
In this part, we describe the types of testing our testers perform to validate a product release. The types of testing are:
- Functional testing
- Usability testing
- Performance testing: Volume testing, Load testing, Stress testing
- Data integrity testing
- Security testing
- Failover and recovery testing
- Configuration testing
- Regulatory requirements testing
- Regression testing
- Integration testing
- User Acceptance Testing
FUNCTIONAL TESTING
|Test objective:||Verify proper target-of-test functionality, including navigation, data entry, processing, and retrieval.|
|Technique:||Execute each use case, use case flow, or function, using valid and invalid data, to verify the following: that the expected results occur when valid data is used; that the appropriate error/warning messages are displayed when invalid data is used; and that each business rule is applied correctly.|
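The valid/invalid-data technique maps naturally onto table-driven tests. The `parse_glucose` validator and its plausibility limits below are invented for illustration; they are not from any clinical standard.

```python
# Hypothetical data-entry validator for a glucose reading (mg/dL); limits are assumed.
def parse_glucose(raw: str):
    """Return (value, error). Exactly one of the two is None."""
    try:
        value = float(raw)
    except ValueError:
        return None, "not a number"
    if not 10 <= value <= 900:   # assumed plausibility range
        return None, "out of plausible range"
    return value, None

# Table of cases: input, expected value, expected error message.
CASES = [
    ("95",   95.0, None),                     # valid data → expected result
    ("abc",  None, "not a number"),           # invalid data → error message
    ("5000", None, "out of plausible range"),
]

def run_cases(cases):
    """Execute the table and collect the inputs whose outcome was wrong."""
    failures = []
    for raw, want_value, want_error in cases:
        if parse_glucose(raw) != (want_value, want_error):
            failures.append(raw)
    return failures
```

In a real suite each row would become one documented test case, so the table itself doubles as objective evidence of what was exercised.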
USABILITY TESTING
|Test objective:||Verify the following: that navigation through the target-of-test properly reflects business functions and requirements, including window-to-window, field-to-field, and use of access methods (tab keys, mouse movements, accelerator keys); that window objects and characteristics, such as menus, size, position, state, and focus, conform to standards; and that equivalent appearance and functionality ("look and feel") are maintained across all supported platform versions.|
|Technique:||Create or modify tests for each window to verify proper navigation and object states for each application window and object.|
VOLUME TESTING
|Test objective:||Verify that the target-of-test functions successfully under the following high-volume scenarios: the maximum (actual or physically capable) number of clients connected (or simulated), all performing the same worst-case (performance) business function for an extended period; multiple queries/report transactions executed simultaneously after the maximum database size has been reached.|
|Technique:||Use tests developed for load testing. Use multiple clients running the same tests or complementary tests to produce the worst-case transaction volume/mix (see stress testing) for an extended period. Create the maximum database size (actual, scaled, or filled with representative data) and use multiple clients to run queries/report transactions simultaneously for extended periods.|
LOAD TESTING
|Test objective:||Verify page load times for designated transactions or business cases under varying workload conditions.|
|Technique:||Use tests developed for functional testing. Modify data files (to increase the number of transactions) or tests to increase the number of times each transaction occurs.|
STRESS TESTING
|Test objective:||Verify that the target-of-test functions properly and without error under the following stress conditions: little or no memory available on the server; the maximum (actual or physically capable) number of clients connected (or simulated); multiple users performing the same transactions against the same data/sites; the worst-case transaction volume/mix.|
|Technique:||Use tests developed for load testing. To test limited resources, run tests on a single machine and reduce or limit the RAM on the server. For the remaining stress tests, use multiple clients running the same tests or complementary tests to produce the worst-case transaction volume/mix.|
DATA INTEGRITY TESTING
|Test objective:||Verify that database access methods and processes function correctly and without data corruption.|
|Technique:||Invoke database access methods and processes, seeding any applicable methods/procedures with valid and invalid data or requests for data. Inspect the database to verify that the data has been populated as intended and all database events have occurred properly, or review the returned data to confirm that the correct data was retrieved for the correct reasons.|
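The seeding-and-inspection technique can be sketched with an in-memory SQLite database; the schema, constraint, and values are invented for the example.

```python
import sqlite3

# Illustrative schema; the CHECK constraint is the integrity rule under test.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE assessment (
        subject_id TEXT NOT NULL,
        glucose REAL CHECK (glucose BETWEEN 10 AND 900)
    )
""")

def insert_reading(subject_id, glucose):
    """Seed the table; True on success, False if the data was rejected."""
    try:
        with conn:  # transaction: commit on success, roll back on error
            conn.execute("INSERT INTO assessment VALUES (?, ?)",
                         (subject_id, glucose))
        return True
    except sqlite3.IntegrityError:
        return False

def stored_readings(subject_id):
    """Inspect the database to confirm what was actually populated."""
    rows = conn.execute(
        "SELECT glucose FROM assessment WHERE subject_id = ?", (subject_id,))
    return [g for (g,) in rows]
```

Seeding with both valid and invalid values, then reading the table back, confirms that rejected rows leave no corruption behind.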
SECURITY TESTING
|Test objective:||Application-level security testing verifies that users are restricted to specific functions or limited to certain data. System-level security testing verifies that only users granted access to the system can access the applications, and only through the appropriate gateways.|
|Technique:||Application-level security: identify and list each user type/role and the functions/data each type/role has permission for; create tests for each user type and verify each permission by creating transactions specific to each user; then modify the user type and re-run the tests for the same users. In each case, verify that additional functions/data are correctly available or denied.|
FAILOVER AND RECOVERY TESTING
|Test objective:||Verify that recovery processes (manual or automated) properly restore the database, applications, and system to a known, desired state. Conditions to test include, but are not limited to, the following: power interruption to the server; communication interruption via network server(s); invalid database pointers/keys; invalid/corrupted data element(s) in the database.|
|Technique:||Tests created for functional testing should be used to create a series of reports for submission. Once the desired starting test point is reached, the following actions should be performed (or simulated) individually: power interruption to the server (simulate or initiate the power-down procedures for the server); interruption via network servers (simulate or initiate communication losses with the network, e.g., physically disconnect communication wires or power down the network server(s)/routers).|
CONFIGURATION TESTING
|Test objective:||Verify that the target-of-test functions properly on the required hardware/software configurations.|
|Technique:||Use functional test cases. Open and close various non-target-of-test software, such as Microsoft Excel or Word, as part of the test or prior to the start of the test. Execute selected transactions to simulate users interacting with the target-of-test and the non-target-of-test software. Repeat the above process, minimizing the available conventional memory on the client.|
REGULATORY REQUIREMENTS TESTING
Verifies that the target-of-test complies with all applicable regulatory requirements.
REGRESSION TESTING
Verifies that changes to the target-of-test have not caused unintended problems.
|Test objective:||A regression determination must be made whenever changes are made to a product. QA determines the areas of code that may be impacted by the changes identified for a product release. Areas that are not impacted may also be selected for regression testing because they are high risk or high use.|
|Technique:||The regression testing selected for a release is documented in the release's Test Plan and is determined by the following factors: the functional area or areas impacted by the changes to the product; the familiarity of the developers with the code they are changing; the extent of the changes being made; and the risk or amount of use associated with certain features of the product, regardless of whether changes were made to those features.|
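One way to make the selection factors above repeatable is a simple rule over features: retest anything impacted by the change, plus anything whose risk or usage score is high regardless. The feature names, scores, and threshold are invented assumptions.

```python
# Illustrative regression-selection rule; data and cutoff are made up.
FEATURES = {
    # name: (impacted_by_change, risk_or_usage score 0-10)
    "data entry": (True,  9),
    "audit log":  (False, 8),   # untouched, but high risk → still retested
    "help pages": (False, 2),
}

RISK_THRESHOLD = 7  # assumed cutoff agreed in the Test Plan

def regression_suite(features, threshold=RISK_THRESHOLD):
    """Select features for regression: impacted, or high risk/usage."""
    return sorted(name for name, (impacted, risk) in features.items()
                  if impacted or risk >= threshold)
```

Recording the scores and the cutoff in the Test Plan turns "what we chose to retest" into a documented, auditable decision rather than a judgment call.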
INTEGRATION TESTING
Verifies the integration of the target-of-test with all applicable interoperability products.
USER ACCEPTANCE TESTING
This formal testing constitutes the final opportunity to ensure that the product functions as specified. The users perform it as the last phase of the software testing process. During this phase, actual users test the software to make sure it can handle the required tasks in real-world scenarios, according to the requirements, the business process, and the associated procedures.
Clinical Trial Software Testing process with MeU includes the following steps:
- Analyzing project requirements. Our testing team studies the project requirements.
- QA completes the Test Plan, which describes the validation and testing activities to be performed during the release. The Test Plan should be completed and signed during the first iteration of a release, and all testing should be executed in accordance with the plan, with the understanding that deviations must be noted in the Test Summary. The Test Plan is managed in a test management tool.
- QA creates or updates tests, based on the Test Plan and the requirements for the release. Test cases are managed in a test management tool.
- A peer reviews each new or updated test. The review may be a review of the test case (manual or automated) or a code review of the test script (automated only), and it must be conducted by someone other than the author of the test.
- QA performs the tests, records the test results, and retains any objective evidence. For automated testing, these actions may be done automatically.
- If bugs are found, they are recorded in the bug tracking tool (Jira or Bugzilla) and associated with the test case that found them. At this step, all risks are identified and assessed, including unresolved bugs, critical risk areas, and risks that may affect project implementation.
- If a test fails, it must be re-executed after the bug that caused the failure is resolved. If a particular input caused the failure, that input must be used during re-execution OR during the verification of the bug (by the re-execution of steps described in the bug).
- QA reviews the test results. Each result must be reviewed by a person other than the person who performed the test. This review should inform the conclusion in the Test Summary.
- When testing is complete, QA completes the following documentation:
- Traceability Matrix, showing the user stories for the release, and the functional specifications and test cases associated with those user stories.
- Test Summary, summarizing the testing that was done, the results of the testing, and the conclusions made based on the testing.
- Results log, showing all test scripts for the release, and their results. Required for automated testing only.
The Detailed Test Case Management Process
In this article, we have shown that compliance is a very important aspect of testing in the healthcare industry. Our testing process and approach help healthcare software companies accelerate the testing cycle and focus on their core business rather than on testing. With verification and validation of the software handled by this process, testing becomes effective and efficient, and quality can be achieved in a short amount of time.