
Considering Evidence-Based Nutrition: The RCT


The double-blind, randomized, controlled trial (DB-RCT) is often considered the gold standard of medical research, and indeed, it is arguably the only way to demonstrate causality. The DB-RCT was designed, in essence, for testing the efficacy of pharmaceutical drugs, and thus many in the medical profession rely heavily on RCTs when deriving courses of treatment for patients and the protocols of 'evidence-based medicine'. With seemingly everyone's increasing interest in nutrition, especially among members of the medical community not formally trained in it, I thought it pertinent to discuss the RCT in nutrition research, because while repeated, consistent results from RCTs can establish causality, those results are far from simple to interpret. Let's explore:


1. Blinding - It is ideal for an RCT to be double blind, to eliminate as much bias as possible that could lead us to doubt the true effects of the intervention. Someone (and/or their doctor) who knows they are getting a statin might behave very differently from someone who doesn't. However, when it comes to nutrition, blinding is often impossible. While participants might not be told they are in the 'low carb' arm of a trial, they'll certainly start to realize that the diet they've been provided is very limited in fruits, starchy vegetables, grains, and tubers. Indeed, you can't really blind someone with functioning taste buds to a low or high salt diet (and this could potentially lead to confounding - e.g. low salt diets tend to be unappealing, which could lead to weight loss, which is known to improve blood pressure). Even some supplement trials may not be truly blind - anyone who's burped after taking a fish oil pill can imagine why. Consider the treatment given, and whether the investigators took any extra measures to assure us that other factors can't account for the effect seen.

2. Multiple interventions - Often, nutrition research tries to make a claim about a specific dietary component when it cannot. This might seem obvious - if you randomize someone to a low carbohydrate diet, they will not only eat fewer carbs, but eat different foods, with different levels of fiber and types of fatty acids, as well as different vitamins and minerals. They'll also eat different tasting foods. A low carb diet may be a high fat diet, a high saturated fat diet, a high protein diet, a low fiber diet, and/or a low palatability diet. Interventions that use foods are almost inherently performing multiple interventions (though researchers are doing a better job of controlling for this). Beware the authors' interpretation of the intervention, because the effects may be due to something beyond what is reported in the conclusions. The issue of multiple interventions extends deep into nutrition research - while you might hear individuals claim that Mediterranean diets slash heart disease risk, you might note that one of the more prominent trials used to back this claim, the PREDIMED trial, also gave counseling to the intervention group, while the control group received none for most of the trial. Note, also, that the authors of this trial refer to the control diet as low fat, when it was in no way low fat, with nearly the same fat intake as the Mediterranean diet (41 vs 37 percent of kcals).

In more subtle ways, nutrition trials are often not what they claim to be. Back to our low carb vs low fat trial example - when you randomize individuals to receive these treatments, you're essentially telling them to rearrange which macronutrients they get their calories from. Virtually every trial that has employed this strategy has given a low carb diet and a low fat diet (essentially, a higher carbohydrate diet) to individuals who are used to eating carbohydrates. I lodged this critique against the most recent low fat versus low carb trial in the Annals of Internal Medicine. Participants in this trial were told to aim for either 40g of carbohydrates per day or less than 30 percent of calories from fat. Both groups had baseline diets that were already high in carbs, with moderate fat intakes. The reality of the treatment in this trial is that the low carb arm was told to make drastic changes, cutting roughly 800 calories from their typical consumption (dropping carbs from 240g to 40g), while the low fat arm was told to barely change anything, cutting roughly 100 calories (fat went from 34 to 29 percent of calories). These low carb vs low fat trials are inherently giving different/multiple interventions - telling one person to hit an absolute gram target of carbohydrate, while telling another to reduce fat to a percentage that requires a calculation based on typical calorie intake, is itself a difference in intervention (a back-of-envelope sketch of this asymmetry follows below). When you've got multiple co-interventions occurring, it's unscientific to state that the benefits are due solely to one thing - consider the multiple ways in which nutrition interventions can be interpreted, and note those limitations before giving hard and fast recommendations.
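
To make that asymmetry concrete, here's the back-of-envelope arithmetic. The carbohydrate and fat-percentage figures come from the trial as described above; the ~2000 kcal/day baseline used to convert percentages into calories is my own assumption for illustration.

```python
# Back-of-envelope sketch of the asymmetry between the two arms.
# Carb figures are from the trial as described above; the 2000 kcal/day
# baseline is an assumption used to convert percentages into calories.

KCAL_PER_G_CARB = 4

baseline_kcal = 2000        # assumed typical daily intake
baseline_carb_g = 240       # baseline carbohydrate intake
baseline_fat_pct = 0.34     # baseline fat intake (34% of kcals)

# Low carb arm: told to hit an absolute gram target.
lowcarb_target_g = 40
lowcarb_cut = (baseline_carb_g - lowcarb_target_g) * KCAL_PER_G_CARB

# Low fat arm: ended up around 29% of calories from fat.
lowfat_end_pct = 0.29
lowfat_cut = (baseline_fat_pct - lowfat_end_pct) * baseline_kcal

print(f"Low carb arm: cut ~{lowcarb_cut:.0f} kcal/day")  # ~800 kcal/day
print(f"Low fat arm:  cut ~{lowfat_cut:.0f} kcal/day")   # ~100 kcal/day
```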

3. Roller-coaster Nutrition - Nutrition trials use a number of study designs that need careful consideration when interpreting them. We can do 'addition' trials and attribute the effects of a specific nutrient to some outcome - for example, when we do fish oil supplementation trials and see clinical benefits, we can attribute the effects back to fish oil. However, in nutrition, we also do a lot of 'swapping' trials, replacing one nutrient or food with another - e.g. when we swap out a serving of whole fat dairy for salmon, not only do we have co-interventions, but we are unable to attribute what the effect was due to: decreasing whole fat dairy or increasing salmon? In nutrition trials that swap foods/nutrients for each other, we've always got one thing going up and one thing going down - roller-coaster nutrition. Consider the saturated fat/polyunsaturated fat story: multiple trials replaced saturated fat rich food sources with PUFA rich oils and claimed benefits for heart health. Is that because saturated fats are bad, or because PUFAs are good? If we had replaced carbohydrate with PUFA, would we have seen benefit? We don't know for sure, because there aren't RCTs examining this swap with clinical endpoints, but it illustrates the problem of defining 'good' and 'bad' in nutrition and making recommendations - do we really need to lower our SFA intake, or just increase PUFA? (A toy model of this ambiguity follows below.) These kinds of trials are quite important because, in an ideal healthy population, we recommend isocaloric swaps for weight maintenance - we don't just say 'reduce saturated fat intake', because this would also create a caloric deficit; we need to tell people what to replace those calories with.
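
Here's a toy model of why a swap trial can't pin the effect on one side. The per-nutrient coefficients are invented purely for illustration - they are not real effect estimates.

```python
# Toy model of a 'swap' trial: each nutrient gets a hypothetical
# per-percent-of-energy effect on some outcome. These coefficients
# are made up for illustration only.

effect_per_pct_energy = {"SFA": +0.5, "PUFA": -0.5, "CHO": 0.0}

def observed_swap_effect(removed, added, pct_energy_swapped):
    # One thing goes down while another goes up, so a single trial only
    # sees the *difference* - it can't separate the two contributions.
    delta = effect_per_pct_energy[added] - effect_per_pct_energy[removed]
    return delta * pct_energy_swapped

print(observed_swap_effect("SFA", "PUFA", 5))  # -5.0: SFA harm, PUFA benefit, or both?
print(observed_swap_effect("CHO", "PUFA", 5))  # -2.5: a different swap, a different answer
```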

4. Baseline levels - Baseline intakes aren't just an issue for low carb vs low fat trials. Whereas with many drugs you are giving an exogenous compound that is quickly cleared from the system, with nutrition you're often studying something individuals are already eating and, for some nutrients, already have some endogenous level of. This makes interpreting results more complex than might be apparent. Let's take calcium supplements as a starting example. A recent paper in American Family Physician used trials of calcium supplementation to conclude that "Patients need to focus on consuming enough calcium for bone health" is a myth. Indeed, many trials of calcium supplementation have failed to show benefit for bone health. But do we conclude from these trials that calcium isn't good for bone health? I wouldn't. What these authors fail to appreciate is that there is a baseline level of calcium intake, and that these RCTs of supplementation are only testing whether calcium intakes beyond current consumption levels are necessary. If individuals are already consuming diets high enough in calcium, should we expect extra calcium to benefit the bone? Everything we know about calcium homeostasis says 'no'. Calcium is even more problematic than some other nutrients because the tight control of plasma calcium levels prevents us from having a good, specific biomarker of long term intake.

This point about baseline calcium intakes is particularly well illustrated by recent discordant meta-analyses of calcium supplementation on fracture risk - depending on whether you include all of the Women's Health Initiative data, you can find different results. The WHI is notorious for being a huge trial that makes up a large percentage of the individuals included in these meta-analyses, and it enrolled women who were already consuming high baseline levels of calcium through diet and other supplements. The extra supplement, for many women, took an already adequate calcium intake and made it excessive. A recent meta-analysis that took only the subset of participants without high baseline calcium intake from supplement use found a significant benefit of calcium on fracture risk (though doing so violated randomization and opens us up to other biases). Considering all of the WHI data, one could argue that this trial truly tests whether excess nutrients are beneficial - and, not surprisingly, the answer is a resounding no. Randomization is great for reducing factors that might confound the results of a trial, but if we end up giving sizable doses of supplements to a mix of replete, sub-optimal, and mildly-deficient individuals, should we expect any dramatic benefit to shine through the statistics? (The toy simulation below illustrates this dilution.) When reviewing RCTs, be sure to check whether the investigators considered baseline levels of nutrient intake, and consider whether the supplement is testing the benefits of physiological or supra-physiological levels of a nutrient.
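
Here's a toy simulation of that dilution. The requirement threshold, effect size, and intake distributions are all invented; the point is only that averaging over replete and deficient participants shrinks the measurable effect.

```python
import random

random.seed(0)

# Toy simulation: assume (hypothetically) that a calcium supplement only
# benefits people whose baseline intake falls below some requirement.
# All numbers here are invented for illustration.
REQUIREMENT_MG = 1000   # hypothetical daily requirement
TRUE_BENEFIT = 1.0      # outcome improvement, but only for deficient people

def average_treatment_effect(n, mean_baseline_mg, sd_mg=250):
    total = 0.0
    for _ in range(n):
        baseline = random.gauss(mean_baseline_mg, sd_mg)
        total += TRUE_BENEFIT if baseline < REQUIREMENT_MG else 0.0
    return total / n

print(f"{average_treatment_effect(10_000, 700):.2f}")   # ~0.88: mostly deficient cohort
print(f"{average_treatment_effect(10_000, 1400):.2f}")  # ~0.05: mostly replete, WHI-like
```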

5. Efficacy and Adherence - The truer gold standard in nutrition research is the metabolic ward trial. In these trials, we essentially trap people: we prepare all of the food, weigh everything out, feed the individuals, weigh exactly what's not eaten, and can precisely calculate nutrient intakes. Unfortunately, few people are truly willing to lock themselves up for long, so these trials are limited in duration (and many admit that they're not the 'goldest' standard, because they miss out on real world interactions). This lack of feasibility leaves us giving dietary interventions to free-living individuals, which presents a couple of new problems. First, did individuals adhere to the dietary regimen? And second (arguably more problematic), how can investigators/readers tell whether they were adherent? Adherence is a huge issue in dietary trials, and finding an objective biomarker of whether participants adhered is difficult (though I'm hopeful metabolomics will help us out here). Take a look at virtually any trial where weight loss is the primary outcome, and you'll see a nice drop at the beginning of the trial when the treatment is relatively new (if it is new... #lowfattrials), and then a tapering off of the effect. This argument has been lodged against the numerous low glycemic index diet trials that have failed to show an effect (though the GI has issues far beyond adherence, IMO) - regardless, if people don't adhere, we shouldn't draw strong conclusions about the efficacy of that dietary regimen. While some may lodge these critiques, and often they are likely valid (like when people report eating crazy low calories but only lose a pound or two - a crude sanity check follows below), we often have a difficult time discerning how good adherence was (what is the 'evidence based' interpretation when you're not sure whether participants actually did something?). Many investigators will report that dietary adherence was good, but without objective measures, we must take this evidence with a grain of salt. It's imperative that those reading trials be familiar with methods of determining adherence to a dietary regimen (e.g. were blood ketones measured on a very low carb diet? did erythrocyte fatty acids change in a fish oil trial?). Not all interventions have reliable, objective measures, and we are often left to believe (or not believe) self-reported dietary intakes. When critically appraising an RCT, consider this issue of adherence, because drawing strong conclusions about a therapy that few participants actually followed can lead to highly erroneous conclusions (and very bad meta-analyses).
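
For the 'crazy low calories but only a pound or two' scenario, this is the kind of crude sanity check I mean. It uses the rough 3500 kcal-per-pound heuristic, which is itself a simplification (real energy balance is dynamic), so treat it as a plausibility screen, not a measurement.

```python
# Crude plausibility check on self-reported intake using the rough
# 3500 kcal-per-pound heuristic (a simplification; energy balance
# is dynamic, so this is a sanity check, not a measurement).

KCAL_PER_LB = 3500

def expected_weight_loss_lb(reported_intake_kcal, estimated_expenditure_kcal, days):
    daily_deficit = estimated_expenditure_kcal - reported_intake_kcal
    return daily_deficit * days / KCAL_PER_LB

# Someone reporting 1200 kcal/day against an estimated 2200 kcal/day
# expenditure should lose roughly 8-9 lb over a month...
print(f"{expected_weight_loss_lb(1200, 2200, 30):.1f} lb")  # ~8.6 lb
# ...so if they only lost a pound or two, the self-report is suspect.
```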

6. Interactions - Nutrients often interact. Anyone who has taken biochemistry and studied one-carbon metabolism knows this (you've got a bunch of B vitamins and amino acids all hanging out together). A common example I like to use to illustrate this principle is calcium: we could do multiple calcium intervention trials, meta-analyze them, and come up with really faulty conclusions if we don't take into account the vitamin D status of the participants (a toy illustration follows below). Nutrient 'synergy' is found all throughout the literature (though sometimes it's used to speak magically about whole foods rather than scientifically about interactions). There are classic discussions of the relative levels of n-6 and n-3 fatty acids, and of the importance of ratios vs absolute quantities of short and long chain fats. When considering interventions that attempt to isolate one thing, consider how that nutrient interacts with other aspects of the diet before drawing strong conclusions.
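
A toy version of the calcium/vitamin D point, with invented effect sizes:

```python
# Toy illustration: suppose (hypothetically) calcium supplementation only
# helps when vitamin D status is adequate. Effect sizes are invented.

def calcium_effect(vitamin_d_sufficient: bool) -> float:
    return 1.0 if vitamin_d_sufficient else 0.0

# Naively pooling trials without accounting for vitamin D status averages
# over two very different answers:
trial_populations_d_sufficient = [True, False, False, True, False]
pooled = sum(calcium_effect(d) for d in trial_populations_d_sufficient) / 5
print(pooled)  # 0.4 - "calcium sort of works", which is true in neither group
```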

7. Specified Outcomes - When you're reading a clinical trial, it's very important to go back and determine what the trial was actually designed to test. Trials should be registered at clinicaltrials.gov. Make sure that what is reported in the conclusions is actually what the trial was designed to look at - if not, consider the statistics and whether the researchers had the power to make those kinds of conclusions. As we learned from the chocolate and weight loss study, it's super easy to measure a ton of biomarkers/anthropometrics after an intervention and find some chance association between the intervention and a change in a biomarker (the quick arithmetic below shows why). Pre-specification of the hypothesis is absolutely essential for clinical trials, and when a finding is reported that the trial wasn't designed for, take it as a hypothesis-generating conclusion, not necessarily a true finding.
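
The multiple-comparisons arithmetic is quick to demonstrate. This assumes (simplistically) independent outcomes each tested at alpha = 0.05; the chocolate study reportedly tracked 18 measurements.

```python
# With k independent outcomes each tested at alpha = 0.05, the chance of
# at least one false positive grows quickly. (Independence is a
# simplifying assumption.)

alpha = 0.05
for k in (1, 5, 10, 18):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} outcomes -> {p_any:.0%} chance of at least one 'finding'")
# 18 outcomes -> ~60%
```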

8. Adjustment Period - One thing that I increasingly look for is the use of a lead-in diet in controlled trials - essentially, researchers put everyone on the same diet for some amount of time before randomizing them to the experimental or control diet. This calibration period allows the changes that occur from just being in a trial to play out before the experimental period begins - a control group and a lead-in diet would've really helped out Dr. Lustig's recent sugar trial. Note, however, that there is significant debate over how long these lead-in diets need to be; the appropriate length will likely depend on the outcome measured.

While these 8 points illustrate some of the complexities of nutrition research, I would note that they don't diminish the importance of RCTs. Observational evidence is also replete with limitations, arguably more. As more and more individuals become interested in nutrition research, it's imperative that they are taught how to appropriately critique the field's methods and draw accurate conclusions from studies, noting ambiguity where it exists.
