Examining Mandometer® Founders' 10 "Reasons" Why Eating Disorders Are Not Mental Disorders – Part II

This is the last post in my mini-series on the Mandometer® Treatment. (Links to earlier posts here: Part I, Part II, and Part III.) In this post I’m going to continue examining Bergh et al.’s reasons for why eating disorders are not mental disorders (#6-10). In my last post I omitted something important: I didn’t define mental disorders. To avoid repeating myself, please see my comment on the topic here.

Bergh et al.’s reason #6 why EDs are not mental disorders:

Reason #6. Gender differences argue against an underlying mental health disorder. Women constitute more than 90% of eating disorder patients (Hoek & van Hoeken, 2003), but teenage males are more likely to have OCD than teenage females (Fireman, Koran, Leventhal, & Jacobson, 2001), and there are no differences in the prevalence of anxiety and anxiety-related disorders in male and female teens (Beesdo, Knappe, & Pine, 2009).

The ratio of women to men receiving treatment is NOT representative of the ratio of women to men struggling with an eating disorder. I think this is fairly obvious: Men who struggle with EDs face additional barriers in getting diagnosed (for one, our screening criteria are not optimized to identify male sufferers) and receiving treatment. (A good, short overview of the issue can be found in this open access article, “Eating Disorders in Men: Underdiagnosed, Undertreated, and Misunderstood”).

But even if men did make up less than 10% of ED sufferers, then what? And say they are more likely to suffer from OCD and other anxiety disorders.* So what?

Bergh et al. are implying that the gender ratios should be similar because their underlying premise (see previous post) is that anxiety disorders cause/lead to EDs; thus, if men are more likely to have OCD/anxiety disorders, they should experience eating disorders more frequently. However, a high prevalence of anxiety disorders among patients with EDs does NOT mean that anxiety disorders lead to EDs, and it does NOT mean that there should be a high prevalence of EDs among individuals with anxiety disorders (see confusion of the inverse).
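To make the confusion-of-the-inverse point concrete, here’s a toy back-of-the-envelope calculation. (The numbers below are made up purely for illustration; they are not from any of the studies discussed.)

```python
# Toy numbers, purely illustrative; NOT real prevalence data.
population = 100_000
n_ed = 1_000     # assume 1% of the population has an eating disorder
n_anx = 10_000   # assume 10% has an anxiety disorder
n_both = 600     # assume 60% of ED patients also have an anxiety disorder

# P(anxiety | ED): high comorbidity among ED patients...
print(f"P(anxiety | ED) = {n_both / n_ed:.0%}")   # 60%

# ...does NOT imply that P(ED | anxiety) is high:
print(f"P(ED | anxiety) = {n_both / n_anx:.0%}")  # 6%
```

Even with 60% comorbidity among ED patients, only 6% of anxiety sufferers would have an ED in this toy scenario, so nothing about the gender ratio in OCD/anxiety disorders tells us what the gender ratio in EDs “should” be.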

Contrary to Bergh et al.’s unstated premise, no one is suggesting that OCD or other anxiety disorders cause or lead to eating disorders. They seem to be committing yet another logical fallacy here: the straw man fallacy.

*Funny thing. Most of the data I found seem to contradict Bergh et al. (not surprising if you’ve read my past posts!): Females are as/more likely than males to suffer from OCD and anxiety disorders, though findings vary substantially depending on the populations studied, methods, sample sizes, and so on. (OCD: here, here, here, here, and here; anxiety disorders in general: here, here, here, and here; lay description here).

Instead, young women respond differently to skipping a meal than young men. When the young men miss dinner, they respond by eating about 30% more during the next day. Young women, on the other hand, eat about 20% less the day after they missed their dinner (Zandian, Ioakimidis, Bergh, Leon, & Södersten, 2011).

Knowing that these authors tend to fudge, omit, or lie about the findings of papers they cite (even their own papers!), I looked up the Zandian et al. (2011) study. They examined 13 women and 9 men. (They had 21 women who participated in another related but separate experiment.) THIRTEEN and NINE.

Notice how they said, “about 30% more” and “about 20% less” in the blurb above? This is what they wrote in their abstract: “We found that women consumed 12% less food after fasting and that men ate 28% more food after fasting.”

I mean c’mon, they can’t even accurately describe their own findings from just two years prior? On its own, it seems like an honest mistake, but taken as a whole (see parts I-III), well, it spells out a different story that I’ll get to in the end. (Did I mention 13 women and 9 men?) The work was supported entirely by the Mando Group AB, which owns the Mandometer® Treatment.

One can see how a reduction of food intake can lead to a further reduction in food intake, thereby making some women more vulnerable to developing an eating disorder when they engage in dieting.

Riiiiight. So, if Bergh et al. are arguing that anorexia nervosa is solely or almost solely a physiological response to reduced food intake, how come the VAST, VAST majority of dieters and people who skip meals DO NOT develop anorexia nervosa? What’s up with that? Could it be that, *gasp*, genetic and neurobiological predispositions that are unique to particular individuals are involved?

But moving on…

Reason #7. Studies of the mechanism underlying increased physical activity in those with eating disorders argue against an underlying mental health disorder. Foraging strategies that include increased activity are engaged in humans when food is restricted (McCue, 2012; Noakes & Spedding, 2012), and physical hyperactivity has long been recognized as a symptom of anorexia (Gull, 1874; Södersten et al., 2008).

Foraging? Really? We are comparing excessive exercise in eating disorders to foraging in low-food environments? Somehow it is hard for me to connect the foraging behaviours that my C. elegans worms exhibited in low-food environments (here’s a semi-relevant link for anyone who, like me, likes this awesome model system) to the rigid and compulsive (but low-level) exercise I briefly engaged in during the initial phases of my eating disorder.

Increased physical activity greatly worsens their physiological status, given that they are also starving. Although this behavior appears to be consistent with a psychiatric disorder involving self-injurious behavior, there is an alternative explanation for this behavior.

Something tells me their alternative explanation isn’t going to be what I’m hoping for.

In particular, when individuals lose body weight, their surface area through which they lose heat is not as affected, but the biomass producing heat is much diminished. Moreover, body metabolism is suppressed in individuals who are restricting their food intake (Speakman & Mitchell, 2011). Patients with eating disorders therefore feel cold continually (Gull, 1874; Luck & Wakeling, 1980, 1982) and a primitive, yet effective method of thermoregulation in such circumstances is increasing physical activity.

Except most don’t. Do some feel cold? Of course; malnutrition and low body fat do that. Do all? No. For more on this, please see my first post, where I went into detail looking into the studies Bergh et al. cite to support their hypothermia-driven over-exercise hypothesis. (Short story: The studies they cite aren’t nearly as convincing as they lead you to believe.)

Besides, this discussion is ignoring an important and more prevalent eating disorder: bulimia nervosa. Many BN patients exercise excessively, but guess what: they are not underweight and many do not lose much (or any) weight.

On the other hand, if one provides warmth to either patients with eating disorders or to rats in an animal model of anorexia, one can prevent the increased physical activity (Bergh et al., 2002; Carrera et al., 2012; Gull, 1874; Gutiérrez, 2013).

It is true that rats do seem to run on their wheel less when they are provided with a heat pad. It is true that it may reduce physical activity in AN patients, too. This, surprisingly, was actually suggested in an independent study (i.e., by a group of people different from Bergh et al.). The open access article is here. It is not a totally out-there idea. If you are cold, heat is good. We move around to keep warm when we are cold. It makes sense. But what about bulimia nervosa patients?

Rather than being a psychiatric symptom, this behavior can therefore be understood as a normal physiological response to feeling cold and/or displaced foraging for food (Södersten et al., 2008).

I don’t know about you, but to suggest that excessive exercise in AN and BN is solely (or even mostly) driven by the desire to keep warm (or to search for food?) seems, um, to dismiss the abundance of evidence we have for all of the other reasons individuals with eating disorders engage in excessive exercise. Never mind that many AN and BN patients do NOT exercise excessively OR feel cold.

When my worms moved around on a plate without food, I don’t think they were thinking about losing weight, burning calories, or getting rid of the anxiety they felt for eating the last bit of their food. Rats and mice are more complex than worms, but not quite as complex as humans, to say the least.

Reason #8. Commonalities with the etiology [causes] and the treatment of obesity argue against an underlying mental health disorder. Just as anorexia and bulimia are generally thought to be the consequence of a complex mental disorder, obesity is often thought to be a consequence of a character flaw (Brownell et al., 2010). However, obesity appears to have the same etiology as anorexia and bulimia. That is, when individuals skip meals, restrict their food intake, or speed through their meals, gut hormones are not available to limit food intake, and consequently, these individuals go on eating (Galhardo et al., 2012).

Bergh et al. are arguing for commonalities in the causes of eating disorders and obesity. But hold on, weren’t they JUST saying above that when women skip a meal they eat LESS the next day?! Yet here they argue the opposite occurs in individuals with obesity: “these individuals go on eating.”

What… 

A small portion of those in this situation continue eating only little food slowly. The vast majority, however, simply gains weight. When we normalize food intake patterns for obese individuals in a manner similar to that used to treat anorexics and bulimics, their body weight is significantly reduced, and their health improves (Ford et al., 2010).

So they “normalized” eating in a group of obese adolescents and came to the conclusion that obese people should just eat less and that they are obese because they eat too quickly. Now, I don’t know a lot about obesity, but something tells me it is a little bit more complex to treat than telling people to eat less. Indeed, a recent study suggests that these messages might actually have the opposite effect.

Bergh et al. are completely ignoring gene-environment interactions. In their discussions of both EDs and obesity, they ignore the genetic and neurobiological predispositions that lead to particular behaviours in particular environments.

In writing, “the vast majority, however, simply gains weight,” they are ignoring the mountain of research that highlights the genetic and epigenetic factors that predispose some individuals to eat more and/or gain weight in an environment with ample food. Just like they are ignoring the same type of research that suggests some individuals are more likely to develop eating disorders when they restrict their caloric intake.

They are shifting the blame entirely onto the individual. Keep in mind that saying genetic/epigenetic factors are involved DOES NOT mean *at all* that individuals are NOT able to recover, change their behaviours, or lose or gain weight. It says nothing about treatment interventions and their efficacy. What it does do is paint a more complex picture of causality.

For more open access/lay articles on obesity, see here, here, here, here, and here.

Reason #9. The success of a therapy in which eating behavior is normalized in patients with eating disorders argues against an underlying mental health disorder. In the initial randomized clinical trial, we found that 88% of the patients went into remission, compared to 7% of the control patients (Bergh et al., 2002). In both the study with 168 patients (Bergh et al., 2002) and the current study with 1,428 patients, the estimated rate of full remission was 75%, with only 10% of the patients relapsing within 5 years and 0% mortality. These outcomes compare favorably with standard care treatments and could be implemented easily in clinics that treat these disorders, much to the benefit of their patients.

I discussed my issues with their study mainly in my second post. I addressed why the success of behavioural interventions DOES NOT negate that EDs are mental disorders in my lengthy comment to CB here.

Bergh et al.’s 2002 paper is hilarious. I could write another essay on it, but instead I’ll just show you something that immediately caught my eye. Their remission criteria included (among other things) that the individuals no longer met the criteria for an eating disorder. Reasonable, right?

Check out this results table (the kind of table that is missing from their 2013 study). First, note that although they reported on 168 patients, they show data for only FOURTEEN. That’s right, 14. You gotta wonder why they couldn’t, like every other study ever, just include the rest of their sample in a simple table, right?

But here’s the kicker. In brackets is the range of the values (the lowest and highest). So if you look at what I highlighted, the BMI of individuals considered in remission from AN ranges from 15.4 to 19.9.

Bergh - 2002 - Table 1

One possible explanation is that the participants were young, so a BMI of 15.4 might actually be a normal weight for someone who is, say, 13. But they don’t mention the average age of the entire sample. Besides, if they had very young patients, they should have switched to using percentage of expected body weight, which would normalize the data based on age.
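For readers unfamiliar with the adjustment, here’s a minimal sketch of what normalizing for age looks like. (The reference medians below are hypothetical placeholders I made up for illustration; a real analysis would pull medians from published growth-chart tables, e.g., CDC BMI-for-age charts.)

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight divided by height squared (kg/m^2)."""
    return weight_kg / height_m ** 2

def percent_of_median_bmi(patient_bmi: float, median_bmi_for_age: float) -> float:
    """Express a patient's BMI as a percentage of the age-appropriate median."""
    return 100 * patient_bmi / median_bmi_for_age

# Hypothetical reference medians, for illustration only:
MEDIAN_BMI_GIRL_13 = 18.7      # placeholder, not an actual growth-chart value
MEDIAN_BMI_ADULT_WOMAN = 21.7  # placeholder

# The same raw BMI of 15.4 means different things at different ages:
print(percent_of_median_bmi(15.4, MEDIAN_BMI_GIRL_13))      # ~82% of the median
print(percent_of_median_bmi(15.4, MEDIAN_BMI_ADULT_WOMAN))  # ~71% of the median
```

Without knowing the patients’ ages, a raw BMI range tells us very little; that’s exactly why reporting it for a mixed-age sample is a problem.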

Check out their table on the demographics/characteristics of their 168 participant sample:

Bergh - 2002 - Table 2

Now I don’t know, maybe addition works differently in Sweden, but 19+13 doesn’t add up to 168. You can’t just cherry-pick data to show sample characteristics in a study like this. It doesn’t work that way. You can’t say “we studied 168 patients but we are just going to show you data for 14 of them, because like, the rest are the same so yeah, just trust us.” Nah-uh.

Reason #10. A partially successful standard of care. Family therapy has become a standard therapy for anorexia that shares certain aspects with our treatment. In particular, this therapy treats anorexia by coaxing affected individuals to increase their food intake without psychiatric treatment. However, without an effective feedback system during meals, this approach does not improve the outcomes of most individuals with eating disorders, and only affords a small improvement in those few patients who are both very young and very mildly affected (Bergh et al., 2006; Eisler et al., 1997; 2000; Fisher, Hetrick, & Rushford, 2010; Lock & le Grange, 2005; Russell, Szmukler, Dare, & Eisler, 1987). At the same time, those patients do not reach a normal body weight, their psychiatric symptoms typically persist, and they are at an increased risk for developing bulimia (Bergh et al., 2006; Geist, Heinmaa, Stephens, Davis, & Katzman, 2000; Robin et al., 1999). Indeed, there was no difference between the groups after a long-term follow-up, very much like the long-term outcomes from psychiatric care (Ben Tovim, 2003). Appropriately, the authors of the family therapy study concluded that their results could be “attributed to the natural outcome of the illness” rather than to the therapy (Eisler et al., 1997, p. 1025).

Wait, why are they citing Eisler et al. (1997) or Ben Tovim (2003) as opposed to the much better and more recent Lock & le Grange (2005) study? (I blogged about that study in depth here; the comments are worthwhile to read, too.) They are citing old research, going, “See, this thing is crappy,” while ignoring the recent and much better studies showing that it actually isn’t that crappy at all.

But more importantly, as I said in my previous post, I fail to see how the argument that our treatment modalities are subpar (and no one is debating that) argues against eating disorders being mental disorders.

FBT works for some. I’m sure Mandometer® works for some, too. Many, however (myself included), are able to recover without either of those, suggesting that there isn’t one path to recovery, one clear-cut, sure-fire way to recover. As I said before, my issue with Mandometer® isn’t so much that I think their idea of focusing on eating and reducing exercise is wacky. It most certainly is NOT. I very much agree with focusing on behavioural components initially. Absolutely.

Let me be absolutely clear:

My issue is not with the treatment as much as it is with their absurd lack of transparency in their methodology, in their statistical analyses, and in their data. My issue is the plethora of unsupported claims. My issue is that when you look at studies they cite to support their claims you find that they actually don’t support their claims at all. My issue is that they leave 378 patients unaccounted for in their 2013 study; the numbers simply do not add up. My issue is that Bergh et al. (2013) do not report ANY actual data about their remitted patients. None.

I have, personally, never seen anything quite like this in peer-reviewed literature to date.

Why is this a problem? I’ll let Gwyneth Owlyn explain, since she beat me to it in her comment on my last post:

Just curious, Behavioral Neuroscience got any explanation for a peer review process that failed to notice that some 378 patients went missing?

At present, I would suggest the takeaway (and not a particularly shocking one to me) is that the APA-published Behavioral Neuroscience journal allows for those with proprietary for-profit programs who are keen to disprove criticisms that their program might lead to fatalities; will fail to provide for remission; and may not be something that could be globally marketed, to present information akin to an advertorial.

Although I expect everyone who might be combing these comments is familiar with the term, an advertorial is an advertisement that is usually formatted and written to look exactly like journalistic content. The material reads as though there has been some journalistic due diligence to investigate and confirm the information and ‘facts’ within the ‘article’, but it is merely company-produced promotional material in its entirety.

Presumably some peer-reviewed journals now offer papertorials to their readership.

This is my takeaway too. This is the reason I spent four posts talking about the Mandometer® Treatment. Just because someone makes a claim, and just because that claim is published in a peer-reviewed journal, does not mean it is valid or true. It may not even mean that the peer-reviewers thought it was a good study (see last part of the comment).

Advertorials and other crap like that are nothing new. For more on these types of shady things, I highly recommend Carl Elliott’s White Coat, Black Hat: Adventures on the Dark Side of Medicine and, though dealing with slightly different issues, Ben Goldacre’s Bad Science: Quacks, Hacks, and Big Pharma Flacks (and Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients). Now, before you have the urge to throw your hands up in defeat, I’d like to say that demanding more transparency from peer review and from any clinical trials (whether they are run by pharmaceutical companies, companies like Mando Group AB, or academics in fancy institutions) is a much better approach than vilifying everything. We need peer review. We just need it to be better.

Anyway, I’m happy to say I’m done talking about this Mandometer® stuff. Back to good research and science shortly! Andrea has two posts lined up that I’ve put on hold to write this mini-series, so look forward to those.

References

Bergh C, Callmar M, Danemar S, Hölcke M, Isberg S, Leon M, Lindgren J, Lundqvist A, Niinimaa M, Olofsson B, Palmberg K, Pettersson A, Zandian M, Asberg K, Brodin U, Maletz L, Court J, Iafeta I, Björnström M, Glantz C, Kjäll L, Rönnskog P, Sjöberg J, & Södersten P (2013). Effective treatment of eating disorders: Results at multiple sites. Behavioral Neuroscience, 127(6), 878-889. PMID: 24341712

Tetyana

Tetyana is the creator and manager of the blog.

8 Comments

  1. The Mandometer 2013 paper does not leave 378 patients “unaccounted for.” There were 1,428 patients enrolled. (p. 879) 737 were classified as in remission. (Table 3) 105 were classified as withdrawn. (Table 3) 124 were classified as treatment failures. (Table 3) 462 patients were classified as “censored.” (Table 3) A patient was classified as censored if she was “still in treatment or dropped out for unknown reasons.” (p. 881) Adding together all categories equals 1,428. Therefore, all patients who were enrolled were accounted for.
    It was proper to “censor” the data for 462 of the patients. The patients still in treatment can’t be evaluated. The patients who dropped out can’t be evaluated. (Most studies of eating disorder treatment, by the way, report a dropout rate of around 20%-40%.
    Dejong, A systematic review of dropout from treatment in outpatients with anorexia nervosa,
    Int J Eat Disord 2012 Jul; 45(5): 635-47)
    Full text of the Mandometer paper is available for only $11.95 US. It can be purchased online at http://www.apa.org/pubs/journals/bne/index.aspx. The paper is in Volume 127, No. 6.

    • Christopher,

      I covered this topic ad nauseam in my second post. I explained in detail the issue of the 378 unaccounted-for participants. In short: They mention in the caption of Figure 1 that 737 achieved full remission and 378 achieved partial remission. That is a total of 1115 patients achieving partial or full remission. This is discordant with everything else in the paper, particularly with Table 3, which I replicated in the same post. (It is labelled Table 1 in my post because I label them differently for my own organizational purposes.) The info in Table 3 omits any data on partially remitted patients and only includes remitted (it doesn’t say full or partial, but I figure they mean full based on Figure 1 and the rest of the paper; 737), censored (462), withdrawn (105), and treatment failures (124). 737 fully remitted + 462 censored + 105 withdrawn + 124 treatment failures = 1428. So what happened to the partially remitted patients from Figure 1, the patients they apparently used for the analyses leading to the graph in Figure 1F?

      Figure 1 caption clearly says: “737 patients fulfilled seven out of seven criteria and 378 fulfilled five out of seven criteria for remission.”
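      To spell the bookkeeping out, here is a trivial sanity check using only the counts reported in the paper (Table 3 and the Figure 1 caption, as quoted above):

      ```python
      # Counts as reported in Bergh et al. (2013):
      table3 = {"remitted": 737, "censored": 462, "withdrawn": 105, "failed": 124}
      enrolled = 1428

      # Table 3's four categories already account for every enrolled patient:
      assert sum(table3.values()) == enrolled  # 737 + 462 + 105 + 124 = 1428

      # Yet the Figure 1 caption reports 737 full plus 378 partial remissions:
      full_remission, partial_remission = 737, 378
      print(full_remission + partial_remission)  # 1115

      # Table 3 has no "partial remission" category, so the 378 partially
      # remitted patients must sit inside one of its other categories;
      # the paper never says which, or why that would make sense.
      ```

      The arithmetic within Table 3 is internally consistent; it is the Figure 1 caption that doesn’t fit anywhere.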

      I highlighted all of this in my second post VERY CLEARLY.

      Here is the link to the relevant images, in case you don’t want to scroll through the second post.

      Figure 1: https://scienceofeds.org/wp-content/uploads/2013/12/Bergh-2013-Figure-1.png
      Table 1: https://scienceofeds.org/wp-content/uploads/2013/12/Bergh-2013-Table-1.png

      I see you’ve gotten bored trolling ATDT and have migrated to commenting on any posts on this blog relating to FBT or, apparently, Mandometer. Congratulations on being SEDs’ first troll!

      I’m still waiting on your responses to my comments to you here and here. I think Liz might be waiting for a response to her questions here, too, though I can’t speak for her.

      I’m tired of repeating myself. You routinely quote the paper with seemingly no ability to fact-check what Bergh et al. themselves write or to integrate the discrepancies within the paper. You repeat the same things in all of your comments; the same things I’ve already spent four lengthy posts explaining. I don’t have unlimited patience to repeat the same thing time and time again. Readers who are interested can freely read my second post, which examined the data presented in the Bergh et al. paper in depth.

  2. Tetyana,
    You are confusing two different things. Table 3 gives data for outcomes (censored, remission, withdrawal, and failure). Those numbers add up.
    The Figure 1 caption says that 737 patients were fully remitted (consistent with Table 3) and adds that an additional 378 patients were partially remitted. Table 3 and Figure 1 are not inconsistent. Obviously, some of the patients classified as censored, withdrawn, and failed also were partially remitted. So what?
    In your earlier post, you complained that the authors “did not define ill anywhere in the paper.” False. On page 880 they wrote that all patients were diagnosed in accordance with DSM-IV. Consequently, the authors in fact did define “ill.”
    I recommend your readers study the actual Mandometer paper.

    • “Obviously, some of the patients classified as censored, withdrawn, and failed also were partially remitted. So what?”

      So what? Are you kidding me? If you don’t think this is a major concern (and one of many in this paper), then I’m sorry but we have nothing more to discuss on this issue. I mean if you think sloppy science is fine, great, there’s not much I can do about that and I’m not about to waste my time explaining the scientific method and the purpose of peer-review.

      And why wouldn’t they mention anywhere that the partially remitted patients were in the withdrawn, censored, and failed groups? (How could they fail if they partially remitted? That’s mind-boggling.) If they partially remitted, why were they still in treatment? If they were, then they shouldn’t be counted in the study PERIOD.

      Regarding the ill part: No, Christopher, you are wrong. Go back to my post and re-read it and then re-read the caption (or just read below):

      The caption in Figure 1 clearly says 251 were classified as severely ill and 486 as ill.

      My comment was, and I quote directly:
      “The authors did NOT define “ill” anywhere in the paper; they defined “severely ill,” but not “ill”. (All of the patients were diagnosed with an ED, so technically, weren’t they all ill?)”

      On page 880 they wrote:

      “After the results of the eating evaluation were reviewed, the patients were diagnosed using the Diagnostic and Statistical Manual of Mental Disorders (4th ed., American Psychiatric Association, 1995).”

      And further below they wrote:

      “There were 251 patients who were classified as severely ill and treated initially as inpatients. They had a body mass index (weight/height squared, kg/m2) ≤ 13.5, and/or a body temperature of ≤ 36°C, bradycardia (≤ 40 bpm), prolonged QTc-time, at risk of cardiac arrhythmia, dehydration 5 to 10%, ≤ 90/60 mmHg (hypotension), hypokalemia (≤ 3.2 nmol/L), binge-eating, vomiting several times each day, and suicidal tendencies.”

      They had 1428 patients enter treatment. 251 were “severely ill”; if you suppose that everyone else was “ill” (given that they were all diagnosed with an ED based on DSM-IV criteria, as I pointed out in my own comment), then it should be: 1428-251=1177 classified as ill. If 486 were “ill” and 251 were “severely ill”, what were the other 691? Faking it? Quasi-ill?

      Please, Christopher, if you’d like to continue commenting, I’d suggest you directly address the points I raise(d) as opposed to going back to points raised previously, ignoring the questions I raise in my rebuttals, and ignoring the questions raised by others (i.e., Liz). Otherwise, this back-and-forth over the same thing (namely the number of patients in this or that figure/table) is boring because it has been addressed many, many times. I’ve never censored or banned ANYONE on this blog (except for censoring irrelevant weight/cal intake numbers that didn’t add to the comment but could trigger readers), but if this type of pointless discourse continues, I may.

      I thoroughly enjoy being challenged. I welcome readers to point out my mistakes, to ask questions, to ask for clarifications, and to question my interpretation of the findings. This is the part of science I loved when I was in graduate school. I was the odd kid that liked journal club and committee meetings, but disliked bench work (not so odd for that last one, but hey). This is still what I enjoy on the blog. Your comments would have the façade of raising interesting and worthwhile questions, or of pointing out my own errors, if it weren’t for the fact that I have already addressed those things, usually in previous posts and/or comments. This risks confusing readers of this post and leading them to think that I haven’t addressed your points.

      I continue to respond to your questions not because I am trying to convince you of this paper’s poor quality. Not at all. I keep responding so that readers do not think I’m ignoring your points, thus suggesting I cannot answer them (that I’m ducking/avoiding them).

      I find it very interesting that you are so adamant about defending this corporation and these “professionals” when just a year and a half ago you wrote on another blog (http://www.laurassoapbox.net/2012/07/parents-are-professionals.html),

      “In my opinion, parents will actually learn more useful information from other parents whose kids have recovered from anorexia nervosa than from all the eating disorder professionals in the world combined. If anyone feels differently, please say so. I’d be interested to hear an example of how anything that has been said by a professional at an eating disorder conference has actually been evidence-based, conclusive, and demonstrably helpful, rather than speculative.”

      You also wrote,

      “However, at the present time, I think it is not unreasonable for parents, not professionals, to be in charge.”

      It seems you’ve changed your mind? I mean, that’s fine; after that thread, where I also commented, I read the Lock & le Grange paper, evaluated the evidence, and blogged about it in what I think was a fairly balanced post.

      Also, readers: If anyone wants to read the paper, email me. The paper is barely worth the kilobytes of space it takes up on my computer, nevermind $12. Donate the $12 to Charlotte’s Helix instead.

  3. Hi Tetyana,

    thanks for the post! The part about the Bergh et al. (2002) paper caught my eye. I thought, “Wait… what? Something with tables like these gets past peer review?”, so I looked the paper up.

    It seems the table-issue is a bit more complicated. Basically, Bergh et al. seem to report two studies in one paper. A quote from the first page of the article (last paragraph of the introduction):

    “We have reported preliminary results in anorexic patients by using this method (27) and now report the results of a pragmatic RCT. (…) we also report the rate of remission and relapse in a large group of patients treated with our method.”

    So the first study is an RCT of their intervention, including a treatment group of 16 patients and a control group of 16 patients. Table 1 (see above) shows the characteristics of the two groups (Bergh et al. mention that the groups didn’t differ, so they put them in one table; separate tables for the groups would have been nice).

    Concerning the 168 patients in the second study, the paper doesn’t display any tables. The authors mention:
    “The characteristics of the patients, including those with an EDNOS, were similar to those in Table 1 (data not shown)” (third page of the article, in the Patients paragraph).

    Table 2 (see above) again refers to the small RCT sample. Confusingly, the table only includes 14 people, instead of 32 as in Table 1.

    My guess is that Table 2 only includes remitted patients from the treatment group; at least, the text says:
    “Treatment had a major effect on remission rates; 14 of 16 patients in the treatment group were in remission after a median of 14.4 months (range 4.9–26.5)” (third page of the article, third paragraph from the bottom right).

    So it seems that all the confusion comes from two studies being reported in one article without being clearly labeled as such. For example, Bergh et al. first explain the methods for both studies and then the results for both studies. This is different from the structure I know, where you first report methods and results for one study and then methods and results for the next study. A lot of confusion could have been avoided if they had provided suitable notes for their tables, clearly stating that they refer to the small RCT sample only. Or better yet: added another table for the large 168-patient sample, instead of saying it’s similar to the small sample. I still don’t quite get why the reviewers didn’t ask for something like this, but at least the tables are a bit less confusing to me now.

    P.S.: Sorry if my comment sounds strange language-wise, I’m not a native speaker.

    • Hi Natalie,

      Thanks for your comment! And no worries about the language thing: Everything is totally clear to me :-).

      I read the paper, so I know that’s what they did, but like you, I was mind-blown (okay, maybe you weren’t mind-blown, but I was) that this got through peer review because:

      1. There are no tables for the larger and more important study. None. I mean, huh? Have you ever seen anything like that before? I haven’t.

      2. They omit a lot of data by saying there were no differences between the groups. Now, as I’m sure you know, this isn’t weird. Researchers do that. But rarely, if ever, have I seen SO MUCH data omitted using that excuse.

      3. Your point on the structure of papers that report multiple studies is also crucial. Maybe I’m being cynical, but I feel like this is purposeful. If they had used the typical structure, it would’ve been much clearer that they actually show no data tables for their 168-patient RCT. April et al.’s (2012) study on over-exercise and suicidality reported FOUR studies in one paper, and they followed the structure you mention; like a mini paper within a paper. They also had a discussion for each part and a large discussion overall. (Like a thesis with several chapters.)

      4. Even between the two tables the number of patients varies, and the numbers they report are small: n=4 for BN patients? C’mon, this isn’t a PET study where that might actually be quasi-acceptable. I find it hard to believe that the data for 168 patients is similar in variability to a group of 14 or 16 patients. I can understand picking the best photo you have to show in a paper as an example: Everyone does this. We all pick the best neuron picture, the best heat map, the clearest brain image. I’m sure anyone reading this who has been involved in research has, at one point, been tasked with finding the prettiest picture to put on a poster, in a paper, or in a grant. I’ve done this many times. Combing through tons of heatmaps to find the nicest one, combing through tons of synapse pictures to find the least fuzzy one, and spending a day taking photos on the fluorescent confocal microscope JUST to get a really pretty one to put in a grant. But then we report the data on ALL of the samples. We report data on all of our n’s.

      I realize that for non-academics and people who are generally unfamiliar with these things, it seems like we are quibbling over small formalities, but I do NOT think that’s the case. These are standards that exist to facilitate transparency of methodology and data. They are important. Now, I don’t know about you, Natalie, but from the peer review that I’ve experienced (not personally per se as much as observing the comments that reviewers made on papers that I’ve contributed to), I am shocked that reviewers didn’t pick up on this. Actually, correction: I am not shocked. Either the Mandometer people paid money to get this stuff through, or they have connections with the editors (and I’m sure you know this helps, as much as academics would hate to admit it publicly) and the editors ignored the peer-reviewers’ comments (which also happens routinely enough), or something else entirely.

      • Thanks for your reply!
        I didn’t clearly say so, but I am also more than surprised that the reviewers did not ask for further tables and clarification. I mean, it’s not a no-name journal; this article was published in PNAS, which is one of the higher-ranking journals in psychology.

        I absolutely agree with you that data for the bigger sample should have been reported. I don’t have much direct experience with peer review yet, but if I had been a reviewer of this paper (I’ve still got quite a way to go before being a potential reviewer for something that gets published in PNAS), I would have asked for the data of the larger sample. Even if there really was no difference from the first sample, it would have been nice for the reader to see and check it for himself.

        So overall I agree, it’s still a no-go. Even if the tables they present are not wrong as such, it’s not good scientific practice to not show demographic and other variables for the majority of your overall sample.

        • I’ve never peer-reviewed anything myself for a professional journal, but I’ve seen the reviews of papers my name’s been on (okay, 2), as well as of others in my lab and my friends’ papers… The reviewers routinely asked for so many experiments that it took another 6-18 months of work to get the paper through.

          PNAS has an impact factor of almost 10. But then again, I’ve seen crap published in Neuron, too. Look at the comments to this Nature Neuroscience paper, for example, it seems the reviewers missed a major problem with the statistical analysis: http://www.ncbi.nlm.nih.gov/pubmed/24292232 (It is in a different field, I don’t know your background. F1 and F2 just refer to subsequent generations.)

          Like Nancy Kanwisher (of MIT) commented in a post I linked to,
          “A third point: Some of the fault lies in the journals themselves, and the fancier the journal the greater the problem. I once participated in one of those infamous reviews for Science where I wrote a scathing review, and so did the other reviewer, and the paper was published. When I confronted the editor, saying there was a very good chance that the paper would not replicate, he basically said that was fine, they were more interested in novelty than replicability. Wow. That had in fact exactly my impression of the priorities at Science, [b]ut it was shocking to hear it endorsed explicitly. These priorities, which are not unique to Science (but seem particularly bad there) are a menace to our field.”

          Unfortunately, the above is not surprising to me at all. Sigh.

          My whole issue with this group of researchers is that their data and methodology are rarely clear. That’s always been my main issue. It is a huge problem, particularly when you factor in their conflicts of interest. I don’t think it is cynical of me to think they are publishing crap studies just so that they can put on their website that they’ve published a study which says X and Y. Who is going to actually go and see if those claims are valid (and who even has access)?
