BMJ The general medical journal website.

BMJ 2004;329:748 (25 September), doi:10.1136/bmj.329.7468.748
Reviews: Press
Journalists on Prozac

Did major media outlets fail to ask the right questions about depression study?

Recent headlines announcing the results of a US study of clinically depressed adolescents were unequivocal: "Talk and pills best for depression in kids" (CNN.com); "Prescribed drugs with therapy aid teen depression" (Wall Street Journal); "Combination aids depressed youths" (New York Times); and "Prozac plus talk is best for teen depression" (Washington Post).

The headlines were matched by the exuberant claim of the study researchers, who said that 71% of teenagers treated with a combination of fluoxetine (Prozac) and cognitive behaviour therapy improved, compared with only 35% of teens treated with placebo alone. That claim, published last month (JAMA 2004;292:807-20), was based on data from the Treatment for Adolescents with Depression Study (TADS), a nationwide, 13-site study of 439 depressed adolescents.

But several points that might concern both laypeople and scientists went unreported by the New York Times, Washington Post, Wall Street Journal, US News & World Report, the Associated Press, National Public Radio's (NPR) Science Friday, and CNN.

Not one of these media giants reported that two of the study's four treatment arms were unblinded—and that it was in one of these unblinded arms that the purported benefit of fluoxetine was described. Lead author Dr John March of Duke University, North Carolina, declared fluoxetine plus talk therapy "the big winner" in an interview with wire service HealthDay reporter Serena Gordon—a phrase he would repeat a few days later to NPR's Ira Flatow. Yet the NPR interview did not mention that in the winning arm teens knew that they were receiving fluoxetine and not placebo, nor did that information appear in Gordon's article. Nor did news reports mention that in the two blinded arms, fluoxetine failed to perform better than placebo on the key Children's Depression Rating Scale. That news could not be found in reports by NPR, HealthDay, the New York Times, Wall Street Journal, or Washington Post. Nor were there interviews with or comments from methodologists, who might have assessed the robustness of data derived from such an unblinded treatment arm.

Dr March told the BMJ that "none [of the journalists] had read the methods paper." He added, "Most were interested in the main take home message of the TADS, not in methodological sub-issues, none of which seriously call into question the main results." News releases by JAMA and the National Institute of Mental Health (NIMH)—intended for the media—also did not mention these points, although the news release by Duke University acknowledged the unblinded nature of two arms of the trial.

There is an apparent consensus among physicians about the value of fluoxetine. Both US and British medical authorities have concluded that fluoxetine is safe and effective for depression in children. Even such a notable critic as Dr Andrew Mosholder (whose report finding increased suicidal behaviour among adolescents treated with antidepressants was suppressed by the US Food and Drug Administration) concluded in his February report that fluoxetine was safe and effective for children (BMJ 2004;329:307).


But some commentators argue that it is precisely when scientific opinion appears uniform that journalists need to be especially careful to scrutinise their sources and ask critical questions.

After the US Senate Select Committee on Intelligence concluded that "group think" had led the Central Intelligence Agency to inflate its intelligence assessments of weapons of mass destruction in Iraq, the New York Times and Washington Post admitted that they had failed to identify the interests of their sources and to examine their claims carefully.

Journalists can play a critical role in preventing "prevailing opinion" from becoming "group think" by adhering to basic principles, such as those put forth by the US-based Association of Health Care Journalists. The association encourages writers to "investigate and report possible links between sources of information (studies or experts) and those (such as manufacturers) who promote a new idea or therapy" and to "present diverse viewpoints in context."

Dr Peter R Mansfield, director of Healthy Skepticism (www.healthyskepticism.org) and research fellow at the University of Adelaide, Australia, said journalists should ask questions about study methodology, look carefully for the completeness of data, and challenge how the data are spun. "They need to question how benefits and risks are reported and how their impact can be exaggerated or minimised by researchers through various statistical manipulations. And they need to know how to find credible experts who can critically assess a study's validity."

Journalists should dig deeper when researchers claim a treatment is effective, said Dr Mansfield. "Effective is not a yes or no dichotomy. They need to ask, 'How effective?' and 'Do the risks outweigh the effectiveness?' Six of the seven suicide attempts in TADS were made by adolescents treated with fluoxetine. Only one child not on fluoxetine attempted suicide. This wasn't statistically significant, but it may be clinically significant when six times as many children on the drug attempt suicide as those on placebo. The data do not support the claim that the benefits outweighed the risks, because this study was not powered to determine the risk of suicide."

The TADS researchers failed to report negative data at the same time that they reported positive data. Using a "dichotomised" scoring system on the Clinical Global Improvement (CGI) scale, TADS researchers reported only scores of 1 (very much improved) or 2 (much improved). Negative scores were not reported.

Asked how readers could be assured that fluoxetine didn't just "squeeze the middle", causing some patients to improve while others worsened, Dr March told the BMJ that this was not the case. However, when he was asked to supply the BMJ with the complete CGI results, including negative scores, he declined, saying, "That will be part of a secondary analysis."

Dr Richard Glass, deputy editor of JAMA, wrote an accompanying editorial in which he concluded that the TADS data showed that "treatment of carefully evaluated adolescents with moderate to severe major depression can be effective... " but that the positive findings "must be qualified" by the "open treatment with fluoxetine." Dr Glass declined to respond to an inquiry by the BMJ about whether the JAMA editors were given the results of the unreported negative outcomes on the CGI scale. When asked why JAMA did not publish the negative CGI scores, he said it wasn't a fair question because it "implies that no negative data were reported." JAMA reported negative outcomes on other scales.

Many news reports also did not describe the financial relationships of the study authors with interested parties—or even frankly misstated them. Most news accounts described the study as "publicly funded." US News & World Report's Nancy Shute wrote that the study was "significant because it is one of the very few studies of antidepressants that were not financed by a drug manufacturer; instead, backing came from the National Institute of Mental Health."

What Ms Shute and many others did not mention was that the lead author, Dr John March, and five of his co-authors had received funding from Eli Lilly, manufacturer of the study drug, even though these disclosures were made in the JAMA article.


Jeanne Lenzer, medical investigative journalist

Kingston, New York state, USA
jeanne.lenzer@verizon.net




© 2004 BMJ Publishing Group Ltd