The incredibly complex case of missing medical research
Editor’s Note: Since the publication of this article, Ben Goldacre (AllTrials) and Anna Powell-Smith of the Evidence-Based Medicine Data Lab at the University of Oxford built a tool (Trials Tracker), which shows the universities, government bodies, and pharmaceutical companies that are and are not sharing their clinical trial results. Their data was last updated in October 2016.
The case of missing clinical trials would surely vex Sherlock Holmes: How to figure out the number of trials that have never been reported, if their existence—like a birth or a death—has never been registered?
One approach is to start with trials that we know exist: those that have been recorded in a registry. The Food and Drug Administration Amendments Act (FDAAA) of 2007 requires that all clinical trials conducted in the U.S., and all trials for drugs that are manufactured in the U.S. but shipped elsewhere, be registered at ClinicalTrials.gov, an online repository that is part of the National Library of Medicine. The researchers must then report their results within a year of completing the trial (unless there is a legally acceptable reason for a further delay). The penalties for not doing so are fines of up to $10,000 per day.
By comparing the number of trials registered at ClinicalTrials.gov with the number of trials that report results, we can get a sense of what’s missing—or at least the known unknowns.
This is what a team of researchers led by Dr. Monique L. Anderson did in a study published in the New England Journal of Medicine in 2015 (among the authors was Dr. Robert Califf, recently confirmed as FDA deputy commissioner). Anderson et al. looked at 13,327 clinical trials that were either completed, or terminated before completion, between January 1, 2008, and August 31, 2012, and were “highly likely” to fall under the FDAAA rules. The authors found that 13 percent (1,790 trials) had reported results within 12 months of completion, as required by law; this rose to 38 percent (5,110 trials) when the reporting period was extended to five years.
In other words, of the 13,327 registered trials completed between January 1, 2008, and August 31, 2012 that likely fell under the FDAAA reporting requirements, the results of 62 percent were still missing at the time of the study. As the authors summed up the situation, everyone who conducts clinical trials—industry, the National Institutes of Health (NIH), government, and academia—“performed poorly with respect to ethical obligations for transparency.”
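The arithmetic behind these figures can be checked directly. A minimal sketch, using only the trial counts cited from the Anderson et al. study:

```python
# Trial counts as cited from Anderson et al. (NEJM, 2015)
total_trials = 13_327   # registered trials highly likely subject to FDAAA
reported_12mo = 1_790   # reported results within 12 months of completion
reported_5yr = 5_110    # reported results within five years of completion

pct_12mo = 100 * reported_12mo / total_trials
pct_5yr = 100 * reported_5yr / total_trials
pct_missing = 100 - pct_5yr

print(f"Reported within 12 months: {pct_12mo:.0f}%")   # ~13 percent
print(f"Reported within 5 years:   {pct_5yr:.0f}%")    # ~38 percent
print(f"Still missing:             {pct_missing:.0f}%")  # ~62 percent
```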
How many of these missing study results have been reported since the Anderson et al. study was completed? It’s impossible to say without redoing the study.
What we can say is that close to two-thirds of clinical trials that we know took place did not report results within five years of completion (at least within the U.S.). We have a clear measure of what’s missing for a specified moment in time. This may be the most precise we can be in answering our question.
Other than Anderson et al., there is one other study that attempts to capture the breadth of missing information from the world’s largest registry. On May 20, 2010, Dr. Tatyana Shamliyan identified 31,161 trials in the ClinicalTrials.gov database that were required to post results, some going back as far as 1992 (which is to say that they were retrospectively registered after ClinicalTrials.gov was launched in 2000). As of May 20, 2010, only 4.5 percent had results available.
Why not use this study to claim that as of 2010, 95 percent of trials on the world’s largest registry had not reported results? The charitable reason is that the study covers a time period when reporting results was initially left to the discretion of researchers, and only gradually came to be seen as something that should—and then must—be done.
Nor does the Shamliyan study reveal how reporting rates changed over time in response to four key events: the 1997 U.S. Food and Drug Administration Modernization Act, which required the creation of a registry of drug trials—ClinicalTrials.gov—for serious or life-threatening conditions; the National Institutes of Health request in 2000 that trials be voluntarily registered at ClinicalTrials.gov; the International Committee of Medical Journal Editors (ICMJE) 2004 decision to require trial registration (prior to enrolling any patients) from 2005 onwards as a condition of publishing a clinical trial in a medical journal; and, as mentioned, the FDAAA of 2007, which made registration and reporting of results mandatory for any clinical trial run within the U.S. that involves a new drug or device.
In other words, while the figure of 4.5 percent is low because of historical non-compliance, we can see from the Anderson study somewhat better reporting in recent years. Nevertheless, the Shamliyan study gives us the best frame for a historical view of what’s missing. Were these studies to be updated, we would have a clearer sense of the trends in compliance and a better understanding of how many results have never been reported.
It is worth remembering that this is not simply a problem of inconvenient paperwork or excess bureaucracy: clinicians and researchers have long been reminded that clinical trials—and the risks they pose to patients—can only be ethically justified if they contribute to the general progress of medical knowledge, which is to say, their results are reported. 
Smaller studies, similar problems
Besides the Anderson and Shamliyan studies, several other smaller-scale studies of the problem are worth noting.
Prayle et al., for example, looked at reporting rates for studies on drugs approved by the FDA that were completed between January 1 and December 31, 2009, in order to assess compliance with the FDAAA. Of the 738 trials subject to mandatory reporting identified by the authors, only 22 percent (163 trials) had reported results within a year of completion.
Gopal et al., which looked at the impact of the 2007 FDAAA on reporting results in ClinicalTrials.gov, found that the percentage of trials reporting results went from 6.8 percent in 2006-2007 (75 trials) to 19.1 percent in 2007-2008 (427 trials) and then to 10.8 percent in 2008-2009 (316 trials). The authors also looked at whether these 818 trials had been published as academic journal articles and found that over the three time periods, publication went from 60 percent to 33 percent to 20 percent.
Another study led by Dr. Tatyana Shamliyan found poor compliance in registering trials. Shamliyan et al. combed a grant-reporting database to find trials funded by the National Institutes of Health involving children between 2000 and 2010. Of the 1,571 studies found, 160 were randomly selected to see whether they were registered at ClinicalTrials.gov and published in the medical literature; the authors found that just 33 percent (52 studies) had been registered, and 53 percent had been published. (The grant-reporting database did not provide enough detail to determine whether these studies were subject to FDAAA reporting mandates for their results, hence the absence of this metric from the study.)
There are other studies that take a different approach to estimating the number of missing trials (by, say, comparing different registries of trials against publication searches for specific drugs), but they do so within the limitations of relatively small sample sizes—in the low-to-mid hundreds of clinical trials. For example, a 2007 study of trials approved in 1998 and conducted in part at a large academic medical center found that 44 percent had never published results. But the sample size was just 197 trials.
Two studies provide strong evidence that trial sponsors are responding positively to mandates on reporting trial results—but both studies were funded by the pharmaceutical industry. This doesn’t necessarily mean that they are wrong or should be dismissed; it simply raises the question as to whether they measured the issue in a way that minimized possible criticism.
The studies by Rawal et al. looked at the timeliness of results reporting from pharmaceutical industry-sponsored trials of new medicines approved by the European Medicines Agency (EMA). For 2012, the authors found that 90 percent of trials (307/340) had reported results within 12 months. An earlier study looking at trials for new medicines approved by the EMA between 2009 and 2011 found that 77 percent (619/882) of trials had reported results within 12 months, and 89 percent within 24 months.
While these developments look positive, they are complicated by the results of a recent study looking at the registration, reporting, and publication of 318 trials on 15 drugs from 10 industry sponsors approved by the FDA in 2012. The authors, Miller et al., used a variety of sources beyond registries to compile a list of trials for each of the drugs, and then sorted the list according to whether these trials fell under FDAAA mandates. The authors found that most trials were registered, with a few notable exceptions; but timely reporting varied dramatically depending on the drug and the industry sponsor: from zero to 100 percent (see table for details).
From these three studies, it is reasonable to conclude that some pharma companies are doing a good (and sometimes excellent) job of reporting results and others aren’t; and if some can achieve excellence, it is possible for all to achieve excellence.
The Anderson and Shamliyan studies provide the best historical picture of the scale of missing information on the results of clinical trials that we know exist. Of course, it would be enormously helpful if both studies were updated to show how much of the historical record has been filled in, but absent this, we can make a reasonable assumption: it is unlikely that missing trial data from before 2008 is going to be reported now without significant patient-group and academic pressure. Look at how sluggish reporting has been even with a legal mandate: why would we assume that time- and budget-pressed researchers, regardless of affiliation, are going to revisit old work just because they should? This is the rationale for the AllTrials campaign: only public pressure can secure that information.
And while it is great that we are seeing signs of an increased commitment to compliance among some pharma companies, as Miller et al.’s study shows, making public the results of trials on new medicines does not negate the failure to do so with older medicines that are still in use.
Then there is the infinitely trickier issue hinted at throughout this piece: what about the studies that were never registered or published or discovered through trawling FDA documents? What of the unknown unknowns?
Much like Sherlock Holmes, we start with observations and patterns: We know from such seminal studies as Dickersin et al.’s “Publication Bias and Clinical Trials” (from 1987) that a significant number of trials were never published (271/1,041, based on a survey of researchers), and that many of these had negative rather than positive results. We know from the Shamliyan study that only some trials from 1992 onwards were retrospectively registered after the creation of ClinicalTrials.gov and its public launch in 2000. It is, therefore, reasonable to infer that an unknown but not trivial number of trials have never been reported in any public form—perhaps in no form whatsoever—and that this number increases the further back we go from the launch of the world’s first clinical trial registry in 2000.
Clinical trials 101
In order to find out whether a medicine or treatment will work, researchers conduct complex studies called clinical trials, which are typically divided into four phases.
Before any testing is carried out on human volunteers, researchers will have run various pre-clinical laboratory experiments to see if a new drug shows promise in treating a disease. Such experiments start with computer simulations, followed by lab experiments that look at the interaction of the drug with specific cells. If the results are promising, the drug will be tested on animals—typically healthy animals first, and then possibly animals with the disease. All of this information is integrated into a model that predicts the best dose for a specific patient population and how the drug should be administered in humans.
If pre-clinical studies show promise, the researchers submit an application to the Food and Drug Administration (FDA) for permission to run a clinical trial.
Sometimes, an exploratory study may be conducted on a small number of sick patients to see whether a new drug that appears to work in an animal study meets the baseline criteria for working in humans. These Phase 0 trials are typically conducted to speed up the identification of new treatments for cancer by answering basic questions, like whether or not the drug enters the bloodstream.
More typically, most trials begin at Phase 1, which is when a drug is tested on 20 to 100 healthy volunteers, or people with the condition. At this time, researchers are looking to find out whether a drug causes side effects. These trials start with very low doses and careful monitoring as the dose increases. The goal is to find out how the drug affects the body.
These studies last several months. According to the FDA, approximately 70 percent of drugs move to the next phase. Because these trials are investigational, they do not have to be registered or have their results reported on ClinicalTrials.gov.
If the drug or treatment is found to be acceptably safe, researchers move on to Phase 2, which is when trial sponsors examine whether a drug works in treating a disease and how. Phase 2 trials come in two versions: 2a, which are pilot trials, and 2b, which involve more rigorous evaluation. Typically, such studies involve up to 100 people with the condition or disease under investigation, although they don’t all necessarily get the same dose of the drug, especially in Phase 2b. With the additional data, the safety and efficacy of the drug continue to be investigated.
Phase 2 trials can take up to two years, and they must be registered and have their results reported on ClinicalTrials.gov. According to the FDA, approximately 33 percent of drugs move from Phase 2 to Phase 3.
If the drug shows a positive benefit-risk profile (meaning the benefits outweigh any side effects), researchers will move to Phase 3. This is when the drug is tested against a comparable treatment/other medicine—or a placebo, if nothing comparable exists. Hundreds, perhaps thousands, of volunteers will receive either the new drug or the existing treatment/placebo and, ideally, the clinicians will not know who has been given which (they will be “blinded”).
By dividing the volunteers into an experimental group and a control group, and keeping other variables constant, the researchers try to eliminate factors in the volunteers that might bias the experiment. Ideally, assignment to the two groups is randomized; randomization means that each volunteer is equally likely to end up in either group, so any such factors are likely to appear in roughly equal measure in both the experimental and control groups—not in one group only. This is why the number of volunteers in each group needs to be large. Given the large numbers of subjects and the length of time involved in these studies, additional safety data is collected.
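A small simulation can make the balancing effect of randomization concrete. This is illustrative only: the volunteer pool, the hidden factor, and the 30 percent figure are all hypothetical.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 1,000 volunteers; 30 percent carry some hidden
# factor (say, a genetic trait) that could bias the trial's outcome.
volunteers = [{"hidden_factor": random.random() < 0.30} for _ in range(1000)]

# Randomize: shuffling makes each volunteer equally likely to land
# in either group, then split the pool down the middle.
random.shuffle(volunteers)
experimental, control = volunteers[:500], volunteers[500:]

def rate(group):
    """Fraction of a group that carries the hidden factor."""
    return sum(v["hidden_factor"] for v in group) / len(group)

print(f"Experimental group: {rate(experimental):.1%} carry the factor")
print(f"Control group:      {rate(control):.1%} carry the factor")
# With 500 volunteers per group, both rates land close to 30 percent;
# with far smaller groups, they can diverge widely by chance alone.
```

This is also why group size matters: the larger the groups, the more tightly chance balances hidden factors between them.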
These trials can take from one to four years (and sometimes longer) to conduct, and they must be registered and have their results reported on ClinicalTrials.gov. The success rate, according to the FDA, is 25 to 30 percent of drugs.
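Taken together, the per-phase rates cited from the FDA imply that only a small fraction of drugs entering Phase 1 make it through Phase 3. A rough back-of-the-envelope calculation, assuming for illustration that each phase's rate applies independently:

```python
# Per-phase advancement rates as cited from the FDA in the text above.
# Illustrative arithmetic only: assumes each phase's rate applies
# independently to the drugs that reach it.
phase1_to_phase2 = 0.70
phase2_to_phase3 = 0.33
phase3_success = 0.275  # midpoint of the cited 25-30 percent range

overall = phase1_to_phase2 * phase2_to_phase3 * phase3_success
print(f"Roughly {overall:.0%} of drugs entering Phase 1 clear Phase 3")
```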
If the Phase 3 trial shows that the new drug is more effective and/or safer than the existing standard of treatment, a new drug application (NDA) is submitted to the FDA for approval. The agency approves applications after studying the clinical trials and ensuring that they have been rigorously conducted and do in fact show the benefits claimed by the drug manufacturer.
If the drug is approved by the FDA, it can be marketed and sold to care providers and the public. However, even once the drug reaches the market, it may continue to be studied for safety and efficacy over time and in diverse populations. These post-marketing studies are called Phase 4 clinical trials.
- Compliance with Results Reporting at ClinicalTrials.gov, Anderson et al., N Engl J Med 2015;372:1031-1039, March 12, 2015. DOI: 10.1056/NEJMsa1409364
- Anderson et al. note that they may have missed some trials due to their non-registration, as well as trials of interventions that were not approved for marketing and thus not required to report results.
- Reporting of results of interventional studies by the information service of the National Institutes of Health, Shamliyan T., Clinical Pharmacology: Advances and Applications 2010;2:169-176. DOI: 10.2147/CPAA.S12398
- A National Survey of Provisions in Clinical-Trial Agreements between Medical Schools and Industry Sponsors, Schulman, K.A., N Engl J Med 2002;347:1335-1341, October 24, 2002. DOI: 10.1056/NEJMsa020349
- Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study, Prayle et al., BMJ 2012;344:d7373. DOI: 10.1136/bmj.d7373
- Research without results: Inadequate public reporting of clinical trial results, Gopal, Ravi K. et al., Contemporary Clinical Trials, Volume 33, Issue 3, 486-491
- Clinical Research Involving Children: Registration, Completeness, and Publication, Tatyana Shamliyan, Robert L. Kane, Pediatrics, May 2012, 129(5):e1291-e1300. DOI: 10.1542/peds.2010-2847
- This sample was large enough to give statistically meaningful estimates: with 160 trials drawn at random from the 1,571, the reported proportions should, with 95 percent confidence, fall within several percentage points of the proportions in the full set.
- Publication or presentation of results from multicenter clinical trials: Evidence from an academic medical center, Turer et al., American Heart Journal, Volume 153, Issue 4, 674-680
- Clinical trial transparency update: an assessment of the disclosure of results of company-sponsored trials associated with new medicines approved in Europe in 2012, Rawal et al., Curr Med Res Opin. 2015;31(7):1431-5. doi: 10.1185/03007995.2015.1047749. Epub 2015 Jun 9.
- Clinical trial transparency: an assessment of the disclosure of results of company-sponsored trials associated with new medicines approved recently in Europe, Rawal et al., Curr Med Res Opin. 2014 Mar;30(3):395-405. doi: 10.1185/03007995.2013.860371. Epub 2013 Nov 11.
- Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012, Miller et al., BMJ Open 2015;5:11 e009758 doi:10.1136/bmjopen-2015-009758.
- Publication bias and clinical trials, Dickersin et al., Controlled Clinical Trials, Volume 8, Issue 4, 1987, Pages 343-353, ISSN 0197-2456, http://dx.doi.org/10.1016/0197-2456(87)90155-3.
One of the major problems in publishing negative results, or even confirmatory positive results, is that front-line journals are unlikely to accept them. The ‘citation index’ value of a negation or confirmation of previous results doesn’t carry enough kudos.
What is required is an open access (electronic?) journal that accepts negative and confirmatory results, perhaps in a simple proforma so that authors do not have to devote too much time and effort to the writing.
Researchers quickly lose interest in past and unsuccessful work. I know, from my own experience, that looking for a breakthrough leads to many dead ends. That exploratory knowledge is, itself, valuable but of low coinage. In fact, some labs do not publish even their positive results unless the paper is good enough to get into Nature, Science or some other highly rated journal.
The threshold for publication in the new ‘Journal of Corroborative Evidence’ should be fairly low (patient numbers, minimal introduction and discussion, etc.), but it should require good design and analysis.