
Oct 07 2010

Improving Assessment Data Consistency in Multi-Center Studies

Ensuring that assessments are conducted, analyzed, and reported consistently across sites is a challenge facing managers of multi-center clinical studies. While identification and quantification of between-site variation is typically part of the statistical analysis, collecting more consistent data from the outset leads to cleaner results. Although consistency is an issue for all clinical studies, the impact is greater on smaller Phase II studies: with fewer subjects, any significant variation between sites affecting important safety or efficacy measures can adversely impact the evaluation of the study results. Several useful strategies can improve consistency across sites in how study assessments are conducted, analyzed, and reported.

1. Use of a Centralized Service Provider

The use of a single central provider of equipment, analysis, and reports can significantly reduce the variability of assessment results between sites. When a central service provider is used, all subjects’ assessment results are evaluated under the same standards, and often by the same individual, who is an expert in the field. For example, clinical safety labs are often processed and reported by a single laboratory that also provides the supplies. That laboratory ensures that the supplies on site are within their expiry dates, that the normal ranges are consistent across sites, and that the same protocol-specified testing is completed and reported for all study subjects. Electrocardiograms (ECGs) are also commonly conducted with the assistance of a central provider, who provides identical equipment, supplies, training, and support to all sites. Results are transmitted electronically for review, and a report of the ECG results is provided back to the site. A central service provider also helps the sponsor ensure that all study data are reported accurately and are available in a timely manner for the final statistical analysis.

2. Evaluation of Site Equipment, Procedures, and Reporting Capabilities

There are instances where the use of a centralized service provider is not feasible, perhaps because of cost, because the required equipment is large or expensive, or because no central service provider is available. In these cases, the sites may be using their own equipment, procedures, and reports. A careful evaluation of each site’s capabilities prior to the start of the study is critical to determine whether all sites can conduct the assessments and report the results as specified in the protocol. If not, some adjustments to the protocol may be warranted. Site evaluation visits should include an evaluation of the equipment involved, the method and frequency of calibration, a collection of all applicable standard operating procedures, collection of sample reports that would be produced for the study subjects, and a review of the qualifications and training of the site staff expected to conduct the assessments. It may be necessary to contract with a more experienced specialist or with an area hospital to ensure that experienced staff conduct the assessments on the proper equipment and that clearly defined procedures are followed.

Once a thorough evaluation of the sites’ capabilities is complete, standard procedures for conducting the assessments and reporting the results should be created and provided to all sites. These procedures should be detailed and include required equipment capabilities, calibration frequency, sample/normal subject testing (if applicable), equipment settings, standards/normal ranges, testing procedure (including the number of repetitions), and data reporting requirements (e.g., data content, units, format). Clearly defined procedures provided to the sites at the start of the study will help ensure that consistent procedures are followed at all sites and that data are reported in a consistent manner that reflects the protocol.

3. Expert Review of Subject Reports

If sites are using their own equipment and reporting standards, a review by an expert in the field can be very useful. The study’s medical monitor may be able to serve in this capacity, or it may be appropriate to seek the services of an outside specialist. On a recent study that included pulmonary function assessments, an expert over-reader was vital to ensuring that all sites conducted the assessments per American Thoracic Society standards. The expert reviewed subject reports and was able to identify procedural problems at one site that were impacting assessment results. Additional training and consultation with the site’s respiratory therapist corrected the problems.

4. Provide Equipment and Training to the Sites

A sponsor may elect to lease or purchase the same equipment for all study sites. Combined with a thorough training session on the equipment’s use and the required assessment procedures, this approach can be more cost-effective than using a central service provider. As new site staff members become involved in the study, or if a site experiences a lengthy enrollment lull, follow-up training may be necessary to maintain data quality.

5. Get the Investigators Involved

A brainstorming session with key investigators to discuss assessment procedures, potential data variability, and solutions to maintain consistency has two significant effects:

  1. It provides the sponsor with an opportunity to identify potential data issues not previously recognized.
  2. It encourages investigators to increase their oversight of potentially problematic assessments at their sites.

An involved investigator is always a benefit and can be critical to gaining the cooperation of the directors of core laboratories at larger institutions.

This is a post by Lisa Sanders, Ph.D.  Lisa is a Clinical Strategy Scientist at Cato Research.