5.14.2009

Dept. of Overstatement

I happened upon a nifty little article at Slate about the limitations of medical studies, and how hard it is to extrapolate the long-term benefits of various interventions from the short-term measures used in those studies. It's worth reading in its entirety, and the author makes it more accessible by likening the studies to (don't laugh) reality shows.
Paul Batalden, who co-founded the Institute for Healthcare Improvement, once observed that "every system is perfectly designed to get the results it gets." Reality TV participants desperate for fame or love sing (and lip-sync) shortened, preselected songs or go on over-the-top dates in fabulous locations, followed by a team of camera operators. The ostensible goal—the one declared to the audience—is to identify the most talented recording artist or most compatible couple. But as Batalden might observe, the shows are instead designed to win over television viewers, which is an altogether different (and possibly incompatible) goal.

The parallels to a major clinical drug trial are uncanny. In the federal Multimodal Treatment Study, hundreds of kids with ADHD, whose families were desperate enough to enroll them in a randomized study, entered a well-funded and highly supervised National Institute of Mental Health program complete with specialized therapy, regular evaluation by developmental experts, and careful drug prescription—a setup that's about as realistic as a date on The Bachelor. Within that very unusual, closely monitored environment, as reported in 1999, stimulant medications caused modest improvement after about a year. In response, use of these products surged nationwide, and Ritalin and its peers became household brands. But in March, the researchers described what happened after the lights went out. In their subsequent years in the real world, the drug-treated kids ultimately ended up no better off than the others.

Epidemiologists call this the problem of "surrogate endpoints," and it's no surprise to fans of reality television. Garnering the greatest number of text-messaging votes after a brief performance doesn't always mean you'll be a successful pop star; winning the final rose after an on-air courtship doesn't mean you'll have a happy marriage; and getting higher scores on a simple rating scale of attention-deficit symptoms doesn't mean you'll later succeed in school. In medicine, this problem happens all the time.

Few drug-trial studies have the time or money to study the actual health outcomes that people care about, such as whether the middle-aged man avoids a heart attack after a few decades, the hyperactive first-grader holds down a good job someday, or the menopausal woman remains free from a hip fracture when she's elderly. Waiting for these events would stifle any meaningful innovation, so doctors pick surrogate endpoints, which they hope serve as short-term checkpoints. Thus drug trials for the preceding examples may just decide to measure the middle-aged man's cholesterol level, the youngster's symptom checklist for hyperactivity, and the woman's bone density with a DEXA scan.

This is really quite a clever way of framing the discussion. What isn't clever, however, is the misleading way the article characterizes ADHD medication.

It starts with the link's headline at Slate's main page, which was what initially caught my eye. It read something along the lines of "Surprising New Evidence that Ritalin Doesn't Work." [Oh, poo. Since I started writing this post, Slate has reconfigured its links, and the initial link heading is gone. Perhaps you would be willing to take my word for it?] Suffice it to say, this is drastically overstating what the study found.

Then there was this, from the article itself:
Though it may seem a stretch, the lessons of reality TV can help us understand why, for example, many parents recently were told—with the suddenness of Jason Mesnick's dumping his fiancée Melissa Rycroft on the last Bachelor finale—that a large federal study contradicted its initial findings and concluded that drug treatment for attention deficit disorder had no benefit in children who were followed for six to eight years. These results put into question the widespread use of stimulants like Ritalin and Concerta, which were prescribed roughly 40 million times last year, and led to an acrimonious public debate among the study's co-authors. [emphasis mine]
What were the actual results?
Using reports from parents and teachers as well as self-reports from the children, now high school-aged, the researchers found that the youth’s functioning remained improved overall compared to their functioning at the beginning of the study, suggesting that available treatments can still be effective. However, they also found the following:
  • The eight-year follow-up revealed no differences in symptoms or functioning among the youths assigned to the different treatment groups as children. This result suggests that the type or intensity of a one-year treatment for ADHD in childhood does not predict future functioning.
  • Youths with ADHD still had significantly more academic and social problems compared with peers who did not have ADHD. They also had more conduct problems including run-ins with police, as well as more depression, and psychiatric hospitalizations.
  • Some differences emerged among the youths with ADHD. For example, youths who had responded well to treatment and maintained their gains for two more years after the end of the trial tended to be functioning the best at eight years.
  • A majority (61.5 percent) of the children who were medicated at the end of the 14-month trial had stopped taking medication by the eight-year follow-up, suggesting that medication treatment may lose appeal with families over time. The reasons for this decline are under investigation, but they nevertheless signal the need for alternative treatments.
  • Children who were no longer taking medication at the eight-year follow-up were generally functioning as well as children who were still medicated, raising questions about whether medication treatment beyond two years continues to be beneficial or needed by all.
Friends, the very measured findings above are not the same as "had no benefit." It appears that the benefits may wane, that the medication will not correct the entirety of the problems experienced by children with ADHD (a complicated and multi-factorial diagnosis), and that compliance drops off over time. Also, children who initially respond better to treatment do better in the long run. No surprises in any of that.

I am no great fan of ADHD medication in this country, which I think is too readily prescribed for too many children with too many problems related to too many factors. I am not particularly keen to argue for its widespread use. But the article oversells itself by hyping an eye-catching finding that doesn't necessarily exist to the degree it implies.

Which is a shame. The article, which is otherwise excellent, didn't really need to be quite so sloppy, even if the sloppiness meant more people were likely to read it.

2 comments:

  1. > The article, which is otherwise excellent, didn't really
    > need to be quite so sloppy, even if doing so meant more
    > people were likely to read it.

    Showing that media created for mass consumption is also like a reality show, right?
