
Weight Loss and Diet Beverages vs Water - Bad Science or Bad Reporting?

The discussion surrounding non-nutritive sweeteners (NNS) is one that often involves more sentiment than science. Claims linking them to cancer and a litany of other conditions have been made and noted in the literature, and these claims often fail to take the concept of 'dosage' into account. If you are interested in understanding how the safe intake dosages were established, and in the claims regarding the safety of artificial/natural NNS, read these two recent reviews (for those who don't trust science, I'd like to note that Shankar's is quite conservative) (1,2). I'd also encourage you to read Colby Vorland's posts on this topic: here.

The study (3) that's been making headlines, published in Obesity, is a 12-week randomized trial that examined the effects of consuming water versus NNS beverages during a weight loss program. Participants met weekly with either an RD or a clinical psychologist to discuss behavioral strategies to promote weight loss (topics included self-monitoring, portion sizes, reading food labels, physical activity, and weight loss maintenance). Resting metabolic rates (RMRs) were estimated by bioelectrical impedance analysis (BIA), and caloric intakes were adjusted by group leaders to target weight loss of 1-2 lbs/week. Physical activity targets were set and assessed via activity logs turned in weekly and an activity monitor armband worn for 1 of the 12 weeks.
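For a rough sense of the calorie-adjustment arithmetic behind a 1-2 lbs/week target, here's a back-of-envelope sketch - this is my illustration, not the authors' protocol; the activity factor and the ~3500 kcal/lb approximation are assumptions on my part:

```python
# Back-of-envelope sketch (not the authors' protocol) of deriving a daily calorie
# target for ~1-2 lbs/week of weight loss from an RMR estimate.

def calorie_target(rmr_kcal, activity_factor=1.4, loss_lbs_per_week=1.5):
    """Estimate a daily intake target for the desired weekly weight loss."""
    maintenance = rmr_kcal * activity_factor      # rough total daily energy expenditure
    daily_deficit = loss_lbs_per_week * 3500 / 7  # ~3500 kcal per lb of body weight (assumption)
    return maintenance - daily_deficit

print(calorie_target(1500))  # RMR of 1500 kcal/day -> target of ~1350 kcal/day
```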

Intervention:
Participants were randomized to one of two groups:
NNS group: consume 24 fl oz of NNS beverages**/day, plus water
H2O group: consume 24 fl oz of water/day and no NNS beverages; NNS-containing foods were allowed
Participants were given coupons and asked to record beverage intake; they were paid $50 for completing 9 of 11 food/beverage logs.
**The NNS beverages could be naturally or artificially sweetened. Not that I think this matters, but for those who would dismiss this immediately because they assume the group was consuming artificial sweeteners: we don't know that - many could've been consuming Truvia/stevia.

Group characteristics were as follows:
NNS group: n = 158, BMI = 33.92, age = 48.3
Water group: n = 150, BMI = 33.30, age = 47.3
All measurements except height were taken at baseline and 12 weeks.
Questionnaires were completed at baseline and 12 weeks.

The primary outcome was body weight change across 12 weeks. It was hypothesized that there would be no difference; equivalence was defined as each group's mean weight change falling within 1.7 kg (3.75 lbs) of the other group's. The authors calculated the sample size needed to detect a between-group difference of 0.57 kg, assuming a standard deviation of 3.9 kg, and determined each arm would need 150 participants, which they met.
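To make that sample-size statement concrete, here is a rough normal-approximation sketch; the one-sided alpha of 0.05 and 80% power are my assumptions, and the authors' exact method may have differed:

```python
from scipy.stats import norm

# Rough per-arm sample size for an equivalence comparison (normal approximation).
# Inputs from the paper's description: expected difference 0.57 kg, SD 3.9 kg,
# equivalence margin 1.7 kg. Alpha and power are assumed here, not stated above.
alpha, power = 0.05, 0.80
sd, true_diff, margin = 3.9, 0.57, 1.7

z_alpha = norm.ppf(1 - alpha)   # ~1.645
z_beta = norm.ppf(power)        # ~0.842

n_per_arm = 2 * sd**2 * (z_alpha + z_beta)**2 / (margin - abs(true_diff))**2
print(round(n_per_arm))         # ~147, consistent with the ~150 per arm enrolled
```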

The Results:
The NNS group lost, on average, more weight (a difference of 1.85 kg, outside the equivalence range). In the water group, 43% of participants lost >5% of their body weight, while 64.3% of the NNS beverage group did. The NNS group also had statistically significant greater decreases in total cholesterol and LDL, as would be expected with greater weight loss. Adherence and physical activity did not differ between groups.

Major Limitations: All participants were habitual NNS drink consumers. The NNS group didn't have to change behavior, whereas the water group did, and we don't have information on what the water group replaced their usual NNS beverages with. The study was also only 12 weeks long - I really wish the authors would've waited until they had longer-term data, but, as they note, adherence tends to fail over time. UPDATE: these limitations affect the water group and may explain why they lost slightly less weight than the NNS group. However, this does not negate the fact that the NNS group lost weight. I will be the first person pointing out this limitation if industry tries to claim that "diet" beverages are superior to water for weight loss. The authors do not conclude this.

Minor Limitations: Activity monitors were only worn for 1 week of the 12; if they had been worn for all 12, the data might help explain the weight loss differences. While a relatively minor critique, I'd be interested to see how caffeine consumption changed between groups - many NNS beverages (e.g., Diet Coke) contain caffeine, and depending on what the water group replaced their formerly consumed NNS beverages with, they may have inadvertently cut out caffeine and consequently moved less/burned fewer calories. Just a passing thought. Also, if we want to get picky, BIA isn't the best way to determine body composition - though that likely didn't affect the results.

Conclusion: In the short term, NNS beverages aren't destroying weight loss efforts. These preliminary data don't support the notion that NNS consumption by dieters leads to seeking out other sweet foods and overcompensating, in a free-living setting, over a 3-month period.

As a side note, this study has gotten a lot of bad press because it's 'industry funded'. I've met Gary Foster (I did my undergraduate work at Temple University) and consider him a great researcher - it's easy to sit behind a computer and accuse scientists of being bought out; attaching a face is much different. The bigger issue here, for me, is the media's reporting of the study - the study's limitations are discussed and the language is relatively mild; the authors were comparing NNS to 'the gold standard' of water. Their introduction clearly lays out the story of people's doubts about whether NNS are effective for weight loss - NOT whether they are 'better' than water. Their conclusion states: "These results strongly suggest that NNS beverages can be part of an effective weight loss strategy and individuals who desire to consume them should not be discouraged from doing so because of concerns that they will undermine short-term weight loss efforts". Now, let's take a look at the media titles:

1. Medical News Today: "Industry-funded study implies diet soda is 'superior to water for weight loss'" - see here.
2. D.C.'s WTOP: "Diet drinks more effective than water in weight loss" - see here.
3. NPR: "Could Diet Soda Really Be Better Than Water For Weight Loss?" - see here.
Who is to blame here? Industry-funded scientists trying to answer a legitimate research question, who note the limitations of their study, or media outlets/reporters who twist study results into an inflammatory, newsworthy story? At least ScienceDaily gave a decent one: "Clinical trial reaffirms diet beverages play positive role in weight loss" - see here. Why aren't we all standing together and addressing how poorly the science media covers the nutritional sciences? There's no need to be divisive when we don't have to be.

If you want artificial sweeteners to be bad, and don't want to recommend them to others, that's fine - don't. I'll be the first to recommend unsweetened green tea over a Diet Coke any day. But discrediting scientists due to industry funding shouldn't be the automatic conclusion. If you don't like industry funding, accept that very little nutrition research will ever get done.

There's no need to be divisive if we don't have to be - but here I feel the need to be, because bias did rear its ugly head, in the form of the anti-artificial-sweetener/anti-science crowd - check out the comments section:
http://www.huffingtonpost.com/laurie-david/aba-diet-soda-study_b_5417188.html

1. http://www.ncbi.nlm.nih.gov/pubmed/23891579
2. http://www.ncbi.nlm.nih.gov/pubmed/23845273
3. http://onlinelibrary.wiley.com/doi/10.1002/oby.20737/abstract





