Joint Statistical Meetings 2010 Retrospective: A CRO Statistician’s Perspective


The Joint Statistical Meetings is often a good place to learn about emerging issues in the analysis of clinical trials (and all of the other fields that statistics touches, as a matter of fact). Here are just a few of the topics that stood out in my mind:

  • Adaptive trials: of course, adaptive trials are a big topic and will continue to get bigger, due in part to the FDA’s draft guidance on adaptive designs. Several sessions focused on Bayesian methods in clinical trials, including designs that use partial subject information to decide when to stop trial accrual. Other sessions focused on how to avoid common pitfalls in adaptive trials. (A toy Bayesian monitoring sketch appears after this list.)
  • Statistical graphics are seeing much more use in the analysis of clinical safety data, and they usually reduce the time needed to discover important patterns, such as those that might indicate liver injury. An effort is underway to build a library of useful graphs for safety analysis, along with SAS and R code to create them. (Yes, the FDA accepts graphs created with R!) An eDISH-style example appears after this list.
  • Personalized medicine is a hot topic, but we are not quite ready for it. An intermediate step is patient population segmentation, or tailored therapeutics: determining the subgroups for which a drug is beneficial. While some regulatory development needs to occur in parallel (REMS comes to mind), statistical methods are currently being developed to elucidate the subgroups that can benefit most from a drug. This is a welcome break from the all-or-nothing thinking that has often been employed in drug development (and in criticisms of drug makers), and an area to watch in the next few years. (A minimal subgroup-interaction sketch appears after this list.)
  • While most people don’t think of this as a biostatistical topic, I think that functional data analysis will eventually become a useful method in clinical trials. It is already used in the analysis of medical images (one presentation used these methods to characterize brain shape changes in lead-exposed workers) and in studying the impact of fish oil on carcinogenesis, but I think it can be applied to more conventional clinical data as well. The advantage of the methodology is that it examines the trajectory of a measure throughout the whole follow-up period, rather than at an endpoint. For example, instead of examining alanine aminotransferase (ALT) measurements at discrete time points, the methodology lets the researcher analyze each subject’s ALT as a whole curve. This enables more direct answers to questions such as “Under what conditions might liver injury occur with this medication?” (A small R sketch of this curve-based view appears after this list.)
  • Missing data is a major issue in clinical trials. It costs a lot of time and money and can cause an otherwise useful clinical trial to fail. Our usual methods of handling it, such as last observation carried forward (LOCF) or ignoring the issue entirely, are crude and often generate misleading results. Recently, the FDA asked the National Academy of Sciences to examine the issue, and a preliminary report has been issued. Eventually, that report will lead to a draft guidance, but right now everybody is trying to make sense of it. Even so, it is worth a read. Among the major principles:
    • Prespecify the method for handling missing data, preferably in the protocol.
    • Do not use LOCF or any other “single imputation” method that replaces missing data with one value. These lead to underestimated standard errors, and that’s bad. Modern methods, such as multiple imputation or likelihood-based approaches, are preferred (see the multiple-imputation sketch after this list).
    • Perform a sensitivity analysis by using another method, or even by assuming a more conservative model for the missing data. Missing-not-at-random (MNAR) methods are useful here. Unfortunately, the interpretation of sensitivity analyses is still an open question.
    • Distinguish between dropouts and missed visits in the analysis.
    • Design the study to minimize missing data. For example, schedule follow-up visits for subjects who drop out, even if they no longer receive treatment.
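
To make the Bayesian monitoring idea above concrete, here is a toy R sketch (not any specific design presented at the meetings): a single-arm trial with a binary endpoint, a Beta prior on the response rate, and a stopping rule based on the posterior probability of beating a null rate. All of the numbers are made up for illustration.

    # Toy Bayesian monitoring for a single-arm binary endpoint.
    # With a Beta(a, b) prior and r responses among n subjects, the
    # posterior for the response rate p is Beta(a + r, b + n - r).
    a <- 1; b <- 1     # uniform prior (an assumption for illustration)
    p0 <- 0.20         # null response rate we hope to beat (made up)
    r <- 9; n <- 25    # interim data: 9 responders among 25 enrolled

    post_prob <- 1 - pbeta(p0, a + r, b + n - r)
    cat("Posterior Pr(p >", p0, ") =", round(post_prob, 3), "\n")

    # One possible rule: stop accrual for efficacy above 0.95,
    # stop for futility below 0.05; otherwise keep enrolling.
    if (post_prob > 0.95) {
      cat("Stop accrual: efficacy\n")
    } else if (post_prob < 0.05) {
      cat("Stop accrual: futility\n")
    } else {
      cat("Continue accrual\n")
    }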
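On the safety-graphics bullet: a widely cited example is the eDISH-style plot of each subject’s peak ALT against peak total bilirubin, both expressed as multiples of the upper limit of normal (ULN). The base-R sketch below uses simulated lab values; the 3x and 2x reference lines follow the usual Hy’s Law convention.

    # Simulated peak lab values expressed as multiples of ULN;
    # a real analysis would compute per-subject maxima on treatment.
    set.seed(42)
    alt  <- rlnorm(200, meanlog = 0,    sdlog = 0.6)   # peak ALT / ULN
    bili <- rlnorm(200, meanlog = -0.2, sdlog = 0.5)   # peak bilirubin / ULN

    plot(alt, bili, log = "xy",
         xlab = "Peak ALT (multiples of ULN)",
         ylab = "Peak total bilirubin (multiples of ULN)",
         main = "eDISH-style safety plot (simulated data)")
    abline(v = 3, h = 2, lty = 2)  # 3x ULN ALT and 2x ULN bilirubin references
    # Subjects in the upper-right quadrant would be flagged for review.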
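For the tailored-therapeutics bullet, the simplest starting point, well short of the newer subgroup-identification methods discussed at the meetings, is a treatment-by-biomarker interaction test. A hypothetical R sketch with simulated data:

    # Simulated trial in which the treatment helps only biomarker-positive
    # subjects; every variable name and effect size here is made up.
    set.seed(1)
    n      <- 400
    trt    <- rbinom(n, 1, 0.5)                 # 1 = active, 0 = placebo
    marker <- rbinom(n, 1, 0.4)                 # 1 = biomarker positive
    y      <- 10 + 2 * trt * marker + rnorm(n)  # benefit only when marker == 1

    fit <- lm(y ~ trt * marker)
    summary(fit)  # a large trt:marker term points to a benefiting subgroup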
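To illustrate the functional data analysis bullet: the sketch below smooths each subject’s simulated ALT series into a curve and compares the arms’ mean curves over the whole follow-up rather than at a single endpoint. It uses only base-R smoothing splines; a serious functional analysis would use basis expansions such as those in the fda package.

    # Simulated ALT trajectories: 20 subjects per arm, visits at weeks 1-12;
    # the treated arm gets a made-up mid-study ALT elevation.
    set.seed(7)
    weeks <- 1:12
    sim_subj <- function(bump)
      30 + 50 * bump * dnorm(weeks, mean = 6, sd = 2) + rnorm(12, sd = 3)
    control <- t(replicate(20, sim_subj(bump = 0)))
    treated <- t(replicate(20, sim_subj(bump = 1)))

    # Smooth each subject's series and evaluate it on a common fine grid.
    grid <- seq(1, 12, by = 0.25)
    smooth_rows <- function(m)
      t(apply(m, 1, function(y) predict(smooth.spline(weeks, y), grid)$y))
    fc <- smooth_rows(control)
    ft <- smooth_rows(treated)

    # Compare whole mean curves instead of one endpoint.
    matplot(grid, cbind(colMeans(fc), colMeans(ft)), type = "l", lty = 1,
            xlab = "Week", ylab = "Mean smoothed ALT (U/L)")
    legend("topright", c("Control", "Treated"), col = 1:2, lty = 1)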
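Finally, on the missing-data recommendations: the sketch below shows multiple imputation with the mice package (one popular R implementation of multiple imputation by chained equations). The trial data and variable names are simulated and hypothetical; the point is that pooled standard errors reflect the uncertainty due to imputation, which single imputation hides.

    library(mice)  # multiple imputation by chained equations

    # Simulated trial: baseline score, treatment arm, and a week-12
    # outcome with roughly 20% of values missing (made-up mechanism).
    set.seed(3)
    n    <- 200
    trt  <- rbinom(n, 1, 0.5)
    base <- rnorm(n, 50, 10)
    wk12 <- base - 5 * trt + rnorm(n, sd = 8)
    wk12[sample(n, 40)] <- NA
    dat <- data.frame(trt, base, wk12)

    # Impute several times, fit the analysis model to each completed
    # data set, and pool the results with Rubin's rules.
    imp  <- mice(dat, m = 20, printFlag = FALSE)
    fits <- with(imp, lm(wk12 ~ trt + base))
    summary(pool(fits))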

The statisticians at Cato Research are engaged in active research and implementation of many of these ideas, and are experienced in discussing them with regulatory authorities. Future blog posts will discuss these issues in more detail.

This is a post by John Johnson, Ph.D. John is a Senior Biostatistician and the Associate Director, Statistics at Cato Research.
