Making “Low Carb” A Murderer – Part 2 of 2 – Broken from the start


(This is part 2 of a 2-part series examining a study posted in The Lancet in August 2018 suggesting that low carb diets will increase your chance of dying early. This series highlights the conduct of the researchers and of those who reported on the study in the media [Part 1], as well as the technical limitations and flaws of the study itself [Part 2].)

In Part 1, we investigated the social and scientific impact of observational studies and the scientists who report them. Specifically, we looked at this study published by The Lancet Public Health: Dietary carbohydrate intake and mortality: a prospective cohort study and meta-analysis 

The key takeaways were:

  • The Lancet study is in no way relevant to any modern low carb diet (i.e., what the world considers a low carb diet or a ketogenic diet)
  • The review process for submitting scientific papers to journals leaves wiggle room for legitimate papers to mix fact with the authors’ interpretation of data, as long as the authors use the correct language to distinguish the two. Sometimes this leads readers (including media outlets) to misinterpret theory as fact.
  • Scientists (researchers in general) are humans at the end of the day, with career, personal, and team-based motivations. Whether or not they believe they are doing the right thing, those motivations can add unwarranted bias into scientific literature.
  • Everyone should learn to examine studies from various angles in order to come to their own conclusion. Don’t just listen to MY interpretation. Try to consider the potential motivations of the authors, spot misleading information and behavior, acknowledge your own biases as a reader, and maybe even try your hand at analyzing the technical details of a study.

In Part 2, we will be doing just that. We will look critically at how the study was designed and how the authors interpreted the data. Here are some of the points we will touch upon:

  • Review of what an epidemiology study’s actual purpose is
  • Where the Lancet study’s data came from
  • Whether or not their data is reliable
  • The supposed impracticality, suggested by the authors, of directly studying low carb diets against mortality
  • The real long-term effects of low carb diets in contrast to all other diets

Observational Epidemiology

We went over it in Part 1, but it’s important to understand what an observational study is and how it relates to this epidemiology study. The World Health Organization defines epidemiology as such:

“Epidemiology is the study of the distribution and determinants of health-related states or events (including disease), and the application of this study to the control of diseases and other health problems. Various methods can be used to carry out epidemiological investigations: surveillance and descriptive studies can be used to study distribution; analytical studies are used to study determinants.”

An observational study, in the context of health, is one that looks at an existing (or preexisting) population and tries to spot patterns within it. Those patterns should serve as the basis for a hypothesis, which should then be taken by the scientific community and tested, via experiments and trials, to see if it holds up.

In school, you probably learned “The Scientific Method,” which states that a good scientist, in one way or another, follows this behavioral pattern, in this order:

  1. Observe
  2. Create a hypothesis
  3. Test with experiments
  4. Analyze and report the results
  5. Repeat with new information

This is a simple philosophical interpretation of how good science is conducted. The real world is a bit messier than that, but in general, this is how a good scientist does and should think. You can look at observational studies as steps 1 and 2: observe and hypothesize.

This Lancet study, an observational study, should act as the observation and hypothesis-generation step of a theory relating low carbohydrate intake and mortality. Any conclusion and reporting should stay strictly within the scope and goal of generating a reasonable hypothesis. Nobody should speak about observational studies as if they’ve proven their own hypothesis. More importantly, the study should not, on its own, warrant recommending courses of action or doling out advice to the public.

Who, what, where, when and why…

To review, according to the authors, the study had a dual purpose: to investigate the association between carb intake and mortality, and to investigate whether replacing carbohydrates with plant-based or animal-based fat and protein would expose a difference in mortality.

In an attempt to keep the nitty-gritty detail of the study design to a minimum, I’ll only go over the main points of how the study was set up.

In scientific literature, a meta-analysis, simply put, is an assessment of various studies, their data, and their findings, within a particular perspective that the meta-analysis chooses to focus upon. Their purposes can range from further validating an existing theory to discovering trends not found in any one of the selected studies but only apparent when the studies are combined. Oftentimes, they are used to derive different interpretations, and thus spawn new hypotheses that can be tested.

The researchers gathered their base data from a dietary questionnaire (more about dietary questionnaires later) taken by participants of a study that began in 1987, the Atherosclerosis Risk in Communities (ARIC) study. The participants of the ARIC study were from four distinct communities in the US: Forsyth County, NC; Jackson, MS; suburbs of Minneapolis, MN; and Washington County, MD.

Additionally, the researchers created a set of criteria that would allow them to effectively examine the focus of the study. With those criteria in hand, they scoured existing data from older studies and meta-analyses (see sidebar for information on meta-analyses). Data from seven other studies, which included carbohydrate intake data, were combined with the ARIC data. They then gathered mortality statistics by searching various databases, including hospital and state health department records. From there, they had all of the information they needed to begin their analysis and find relevant correlative patterns.

Let’s take a step back and look at the data from the ARIC study. How was that gathered? As part of the intake process of the ARIC study, which occurred between 1987 and 1989, participants were interviewed and asked to fill out a food frequency questionnaire (FFQ). This questionnaire had a list of 66 foods, and participants were tasked with answering how frequently they ate those foods. From there, their average intake of macronutrients and calories was derived. Just a couple of years prior (1985), one of the authors of the Lancet study (Walter Willett) had co-authored a study attempting to validate the reproducibility and relative validity of similar FFQs. One bit from that study is rather alarming, and we will touch upon it later in this article.

The great divide

The researchers of the Lancet study then sorted all of the ARIC participants by how much carbohydrate they ate, from the lowest percentage of energy intake to the highest. Based on that sorting, the participants were then divided into five equal groups. They didn’t define minimum and maximum thresholds of any data point in order to logically determine who should be in which study group. They simply took all of the participants and divided by 5. Thus, the study groups were called “quantiles.” Each quantile consisted of 3086 people (a minimal sketch of this kind of split follows the table below). What they ended up with were the following groups:

Quantile             Average % of energy from carbohydrate consumed
Q1 (Lowest Carb)     37% ± 5.7%
Q2                   44% ± 2.5%
Q3                   49% ± 2.2%
Q4                   53% ± 2.8%
Q5 (Highest Carb)    61% ± 6.3%
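
To make the grouping concrete, here is a minimal sketch of that kind of equal-fifths split, using made-up participant data (not the actual ARIC dataset):

```python
# A minimal sketch of the equal-fifths split described above, using
# made-up participant data (NOT the actual ARIC dataset).

participants = [
    {"id": i, "carb_pct": pct}
    for i, pct in enumerate([37.0, 61.5, 44.2, 52.8, 49.1, 40.3, 58.9, 45.5, 50.2, 55.0])
]

# Sort by percentage of energy from carbohydrate, lowest to highest.
ranked = sorted(participants, key=lambda p: p["carb_pct"])

# Divide the sorted list into five equal groups ("quantiles") -- no
# carbohydrate thresholds are involved, just equal head counts.
n_groups = 5
group_size = len(ranked) // n_groups
quantiles = [ranked[i * group_size:(i + 1) * group_size] for i in range(n_groups)]

for q, group in enumerate(quantiles, start=1):
    print(f"Q{q}: {[p['carb_pct'] for p in group]}")
```

Notice that the cut points land wherever the sorted data happens to fall, which is exactly why the “low carb” group bottoms out around 37% of energy from carbohydrate rather than anywhere near a ketogenic intake.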

Here is the full data table for reference. It really is worth reading the entire table to see what patterns you can spot (some I won’t get into here for the sake of time).

ARIC Study Data Table


The Findings

After examining the data, the authors interpreted it. Their assessment was that, based on who died in each group, the lowest and highest carb eaters were at the highest risk of early mortality. By combining the rest of the data from the other studies, they were able to derive a hypothesis for their other question: whether or not reported intake of plant-based vs animal-based fat and protein correlated with mortality. What they found was that there were statistically significant differences in the data. What they concluded was that people who swapped carbs for plant-based rather than animal-based fats and proteins generally fared better. Notice the distinction I made there: “the data suggests” vs “people who changed.” Data and what people actually do are never exactly the same. In fact… sometimes they are very far off. Worth repeating.

Data and what people actually do are never exactly the same.

“I know what you ate last summer”

(*crickets…* on that reference, I’m sure.)

Let’s examine these Food Frequency Questionnaires, shall we? Let me start off by asking you: what did you eat last month? Last week? How confident are you in your answer?

Food frequency questionnaires are controversial because they rely on participant recall. The longer you wait to ask, the more people misreport. Simply forgetting, lying out of shame, lying to oneself, misreporting, and checking the box for whatever sounds healthy or whatever the participant thinks is the “right” answer the researcher wants… these are all real factors.

Thịt Kho deserves a shoutout. It can be made keto with a bit of tweaking. So good! (Picture from All Recipes)


FFQs also have the limitation of not including every food one might eat. FFQs are typically designed around a specific population’s diet and the desired datasets. For example, if you are examining how much fish oil people consume in Japan, your FFQ might include lots of fish. The Lancet study’s base data came from four US communities of varying ethnicities. America is a melting pot, after all. I’m sure burgers were included in the FFQ, but I doubt it included Thịt Kho, one of my favorite (and frequently eaten) homemade dishes growing up in a Vietnamese household. I am very much a US citizen. Starting to see the potential flaws here?

I noted earlier that Walter Willett was an author on a 1985 study that examined the reproducibility and validity of FFQs. Here is a quote from the Lancet study:

“Participants completed an interview that included a 66-item semi-quantitative food frequency questionnaire (FFQ), modified from a 61-item FFQ designed and validated by Willett and colleagues,16 at Visit 1 (1987–89) and Visit 3 (1993–95)”

Here is an excerpt from Willett’s 1985 abstract:

“With the exception of sucrose and total carbohydrate, nutrient intakes from the diet records tended to correlate more strongly with those computed from the questionnaire after adjustment for total caloric intake.”

This is how I interpret these pieces, in context. The Lancet authors recognized the controversial nature of FFQs. In order to minimize skepticism, they threw in a note that the method had been validated before, by referencing Willett’s 1985 study. It is one of the few studies that directly looks at the quality of FFQs and reports generally positive outcomes.

Additionally, Willett’s abstract even mentions that sucrose (sugar) and total carbohydrate were the only data points that didn’t seem to improve in accuracy when adjusted for total caloric intake. In other words, when they normalized total caloric intake (probably by scaling each participant’s data, not just a single data type, so that every participant ends up with close to the same caloric intake on paper), the data became seemingly more accurate. Well, more accurate except for the carbohydrate data. We don’t know whether the carb data’s accuracy simply didn’t change, or whether it changed for the worse. Either way, it gives grounds for skepticism about FFQ validity when it comes to carbohydrate. The rest of the study isn’t open access, so I can’t assess Willett et al.’s exact methods.
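
For the curious, here is a minimal sketch of one simple way such an energy adjustment could work. This is my assumption for illustration only; without access to the full paper, I can’t verify Willett’s actual method.

```python
# A minimal sketch of simple energy adjustment (an assumption for illustration,
# NOT the method from Willett's paper): scale each participant's reported
# nutrient calories to a common reference total intake.

REFERENCE_KCAL = 2000  # hypothetical reference total intake

def energy_adjust(nutrient_kcal, total_kcal, reference_kcal=REFERENCE_KCAL):
    """Return the nutrient's calories rescaled as if the participant's
    total reported intake had equaled the reference intake."""
    return nutrient_kcal * (reference_kcal / total_kcal)

# Example: a participant reports 1558 kcal total, ~37% of it from carbohydrate.
carb_kcal = 0.37 * 1558
print(round(energy_adjust(carb_kcal, 1558)))  # ~740 kcal of carbohydrate at 2000 kcal total
```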

When adjusted for factors including total caloric intake, the data suggests that low carb is associated with mortality risk.


With that said, it’s worth noting that the Lancet study (via ARIC) used caloric adjustment, referenced as “total energy consumption,” when making a correlation between mortality hazard and carbohydrate consumption.

You’re probably asking, “But wait – isn’t the Lancet study, more or less, all about carbohydrate?” Exactly. You’ve asked the right question. How did someone who co-authored both studies recognize the potential pitfalls of measuring carbohydrate using FFQs and then, in good conscience, go on to say that the method is valid for this Lancet study?

I digress. Since that FFQ validation study, more recent assessments have not shown such favorable FFQ performance. One study even directly looked at how accurate FFQs were in assessing total protein intake, and concluded that:

“Because of severe attenuation, the FFQ cannot be recommended as an instrument for evaluating relations between absolute intake of energy or protein and disease.”

You’re probably now thinking, “But wait – isn’t the other part of the study all about protein and fat source?” Yep.

A 2014 paper, published in the American Journal of Epidemiology, critiqued various dietary assessment methods. It outlined the controversial nature of FFQs and cited opposing views on their validity. Some of the references were editorials and rebuttals in a debate between Willett and other scientists. The paper also highlighted participant recall issues as one of the major limitations of FFQs.

The point I’m making is that the researchers of the Lancet study can claim the validity of the FFQ data, but it’s suspect at its core. There have been recent attempts to improve FFQ quality via advanced programs and algorithms, but the Lancet data was procured in the late 1980s (not so recent).

Apparently, everyone is skinny and we just need glasses

We’ve determined that the FFQ data in general is suspect. To further make a case against the validity of the data, let’s look at how some of the numbers are unbelievable. That’s not hyperbole; I quite literally can’t believe them.

Looking at the table of the five quantiles, the Lancet study calculated the following average daily caloric intakes from the FFQs.

Quantile             Average daily caloric intake (kcal)
Q1 (Lowest Carb)     1558
Q2                   1655
Q3                   1660
Q4                   1646
Q5 (Highest Carb)    1607

Body Mass Index (BMI): a minimalistic score derived simply from an individual’s weight and height, not accounting for actual body composition of lean and fatty tissue.
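
Since BMI comes up repeatedly below, here is the formula in a minimal form: weight in kilograms divided by height in meters squared (the example numbers are hypothetical).

```python
# BMI uses only weight and height; body composition never enters the formula.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

print(round(bmi(80, 1.75), 1))  # hypothetical person: 80 kg at 1.75 m -> BMI ~26.1
```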

Almost anyone, likely including these researchers themselves, can agree that these numbers are low. The majority of people would lose weight at this kind of caloric deficit. It might be low quality weight loss (muscle vs fat), but either way, everyone should be losing weight. These caloric intake values don’t seem realistic when you consider that this data was collected from a few specific populations chosen by location, NOT from people split based on whether or not they were following specific diets (i.e., not generally considered “dieting” or being “on a diet”). Looking at these numbers, you might assume that the participant selection required that the participants be cutting calories! Yet, based on the body mass index (BMI) values, they all got heavier over time. Across all five groups, the change in BMI was consistent. People should have exited this study skinnier, not heavier!
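
As a rough back-of-the-envelope check (my assumptions: roughly 2300 kcal/day maintenance, which I argue for in the next section, and the crude 3500-kcal-per-pound heuristic), the reported Q1 intake would imply steady weight loss, not gain:

```python
# Back-of-the-envelope only: assumed maintenance intake and the crude
# 3500-kcal-per-pound heuristic, applied to Q1's reported average intake.
MAINTENANCE_KCAL = 2300   # assumed average maintenance intake (see next section)
REPORTED_KCAL = 1558      # Q1's reported average daily intake
KCAL_PER_POUND = 3500     # rough heuristic; ignores metabolic adaptation

daily_deficit = MAINTENANCE_KCAL - REPORTED_KCAL       # 742 kcal/day
weekly_loss_lb = daily_deficit * 7 / KCAL_PER_POUND    # ~1.5 lb/week
print(f"Implied deficit: {daily_deficit} kcal/day, roughly {weekly_loss_lb:.1f} lb lost per week")
```

Yet the reported BMI went up, not down, which is hard to square with the reported intakes.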

Changes in BMI over 3 and 6 years as reported by the researchers


Also, it’s generally agreed that, on a standard diet, it’s typically easier to overeat calories from fat than from carbs. Fat is much more calorie-dense: it looks smaller on a plate and carries more than double the calories per gram of protein or carbohydrate. It’s generally understood that fatty foods pack lots of calories compared to leaner foods. People’s stomachs don’t feel “full” and “stretched” because of calories consumed, but primarily because of volume consumed. Is it easy to believe that the groups that ate the least carbs and the most fat also ate the fewest calories (1558 calories in Q1 vs 1646 and 1607 calories in Q4 and Q5, respectively)? I find that difficult to believe.
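
Here is that density gap as a quick illustration, using the standard Atwater values (9 kcal/g for fat, 4 kcal/g for carbohydrate and protein); the equal portion weight is just a hypothetical example:

```python
# Same weight on the plate, very different calories (standard Atwater values).
KCAL_PER_GRAM = {"fat": 9, "carbohydrate": 4, "protein": 4}

portion_g = 100  # hypothetical equal-weight portions
for macro, kcal_per_g in KCAL_PER_GRAM.items():
    print(f"{portion_g} g of {macro}: {portion_g * kcal_per_g} kcal")
# fat: 900 kcal vs. 400 kcal each for carbohydrate and protein
```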

How much do Americans actually eat? It’s not so simple

I did a little digging around to see how much people actually eat (or as close as we can estimate). In an upcoming article, I’ll outline my investigation into the matter. The answer isn’t simple and warrants an in-depth investigation. However, in general, this is what I found:

  1. Obesity Rates Over Time

    Nobody really knows, but nobody thinks it’s under 2000 calories a day. The USDA published an age- and activity-dependent recommendation on caloric needs (how much to eat to maintain weight) here. Keep in mind that over the decades during which the study data was collected, obesity rates increased quite a bit.

  2. The closest we get to an answer comes from 24-hour recall interviews (not FFQs, but actually sitting with someone and helping them recall their food over the last 24 hours) conducted by The National Health and Nutrition Examination Survey. The results are published by the USDA. Here is a sample from 2005–2006 that calculated the following: men in their 40s ate around 2753 calories per day; women in their 40s ate around 1873 calories. If you average the two (the Lancet study had a roughly 1:1 male to female ratio), you get closer to 2300 calories a day (see the quick check after this list).
  3. Digging around in some other data, which I’ll publish later, I found that the number is probably somewhere between 2300 and 2400 calories on average as well.
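
Here is the quick check for point 2, assuming the roughly 1:1 male-to-female ratio mentioned above:

```python
# Simple average of the NHANES-derived figures quoted above,
# assuming a roughly 1:1 male-to-female split.
men_kcal, women_kcal = 2753, 1873
print((men_kcal + women_kcal) / 2)  # 2313.0 -> "closer to 2300 calories a day"
```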

Takeaway: the realistic average American caloric intake over the course of the study was a far cry from 1558 calories.

So ask yourself, do you believe that the FFQ worked? Not that this study has anything to do with modern low carb diets anyway… but if it did, would you trust the data?

If the starting data is wrong, does anything that come after it really matter? Do the findings have much integrity?

A Very Vicious Cycle

Consider this: this is a study that used various other studies’ existing data (including other studies conducted by authors of this very study – if that makes sense). Using existing data is not, in and of itself, something that is frowned upon. However, when implementing this methodology, it is easy to cherry-pick your data sources and leave out the studies that don’t support your hypothesis.

Many, if not all, of the studies referenced in the Lancet one (at least the ones I dug into), as well as the data they combined to come to their conclusions, aren’t randomized trials. They are, as you may have guessed by now, observational studies.

Much like a game of telephone, how valid can observational studies be if they primarily use data from older observational studies? And how much less valid are they if the first study’s data was wrong to begin with?

Who Am I?!

Look back at the ARIC participant data and the five quantiles. Did you notice significant demographic and behavioral differences between the two extreme groups (lowest vs highest carb)? Here is what the study says about the low-carb group:

“Participants who consumed a relatively low percentage of total energy from carbohydrates (ie, participants in the lowest quantiles) were more likely to be young, male, a self-reported race other than black, college graduates, have high body-mass index, exercise less during leisure time, have high household income, smoke cigarettes, and have diabetes.”

The big question should be, “do any of the groups in this study really represent modern low-carb dieters?” If you are on low-carb yourself, you should be asking “does this study represent me in any way?”

Aside from all of these higher-level questions, from a hard-data perspective (as mentioned in Part 1), 37% of energy from carbs is not even close to what society considers a low carb diet. A ketogenic diet is closer to 5% of energy from carbs.

If you are dieting for weight and health issues, you are likely “trying to take care of yourself.” Do the participants who got sorted into quantile 1 look like people who would say, “I’m trying to take care of myself”? They tended to be smokers, probably had high-stress jobs, and were more likely to have had weight issues and diabetes to begin with. How about quantile 5, during decades when people absolutely “knew” that eating less fat and more grains was the healthy thing to do?

To me, it looks like Q1 was sick to begin with and cared less about overall health – diet included. They clearly, as a whole, weren’t on a low carb diet. To me, Q1 represents a group of people who ate whatever was available, not whatever fit a diet plan they subscribed to. Q5, I speculate, is a group with a higher number of “health nuts” trying to “take care of themselves.”

Who do you think would live longer? Again, where do you fit into the picture, if anywhere? If you’ve read this far, I doubt you identify with quantile 1.

I rest my case

I will bring this full circle, one last time, back to the idea that the authors have tried to associate their “low carb” with what regular people actually consider low carb. Here is another example of the authors attempting to “sell” this association with popular low carb:

“In practice, however, low carbohydrate diets that exchange carbohydrates for a greater intake of protein or fat have gained substantial popularity because of their ability to induce short-term weight loss, despite incomplete and conflicting data regarding their long-term effects on health outcomes.”

In that statement, they cite four (1, 2, 3, 4) other studies that they interpret as conflicting in regard to low-carb being a healthy lifestyle. They try to link negative health effects with whatever they are calling “low carb.” Can you guess what kinds of studies those were? Probably not the types of studies that we’d call an experiment or clinical trial. Probably nothing like this actual randomized trial (NOT an observational study – an actual experiment), published in the Journal of the American Medical Association in 2007, which showed favorable changes across all of the health markers in the lowest carb group, including markers that have been studied for mortality risk as well.

I encourage you to look into those studies they cited as “conflicting.” See what you find. For example, did they use FFQs? And who was listed as an author? Could it be someone who really likes FFQs? (Hint: it’s Willett.) (And that study is about carbs and mortality – sound familiar? Was there already bias coming into the Lancet study from the onset?)

The statement I quoted from the Lancet study (in Part 1), claiming that directly studying all-cause mortality for low carb diets via a randomized trial is impractical, is technically true. You’d have to randomize a population, change their diets, make sure they stay on their assigned diets, and follow them for as long as it takes for them to die off in order to tally up the data. However, there are various indirect ways to use trials to get closer to the truth. What we should NOT do is allow statements like that to discourage us from testing their hypotheses in whatever way we can. Nor should we act upon this data and change our diets in response.

There are plenty of other issues with the data, as well as questionable word choices and references used by the authors of this study. However, this article is a doozy in length already. If you’ve read this far, you deserve an (almond flour) cookie.

A *hop* of faith… from theory to fact.

I can’t tell you the number of times I’ve had conversations with people who mention that they are afraid to adopt a low-carb ketogenic lifestyle. The reason they give is that there isn’t long-term data on it. Despite being convinced by all of the shorter-term trials and modern experiments that show direct improvements to so many health markers, they still hold on to the idea that the long-term unknowns are too risky. I hope this investigation has convinced you that long-term observational study data should ABSOLUTELY NEVER be weighted above actual experimental data. It should be the other way around. The simple fact is that observational study data is no proof at all, no matter how long-term it is. The ONLY long-term data we have, across ALL diets, is observational. Thus the argument that ketogenic diets are risky because we don’t have long-term data on them is undermined by the fact that long-term observational data doesn’t prove, nor should it try to prove, anything beyond the idea that “there might be something to this.” We do, however, know what happens when you are part of a population that doesn’t subscribe to a diet at all, or that diets using USDA recommendations. Just look at America. Actually, look at the whole westernized world.

So you tell me, what’s riskier? Trusting the randomized trials favoring a ketogenic lifestyle and making a change? Or changing nothing because of the existence of questionable long-term observational studies and lack of long-term observational ketogenic studies?

A leap of faith is too big a jump to be scientific. Nothing is ever 100% proven, but when enough evidence accumulates, we take a tiny hop from theory to fact. The evidence in support of ketogenic diets is there when the experiments are done. Maybe it’s time, as a society, to at least consider taking a tiny “hop” of faith.


Feel free to discuss all things research with me (and each other) in the comments. I’ll be adding small tidbits I left out in the comments as bonus talking points.

Yes, I am indeed the Asian with the cheese bowl. I am also a huge nerd and love science. My real job as the co-founder and technical director of Inphantry keeps my nerdiness factor at a record high. In my spare time, I am obsessive about diet and nutrition. Maybe even too obsessive... Keep reading and I'm sure you will pick up little bits of who I am along the way!

2 Comments on "Making “Low Carb” A Murderer – Part 2 of 2 – Broken from the start"

  1. Derek Tran says:

    Here is one of those tidbits that I decided to exclude from the main report. The study mentioned: “We did not update carbohydrate exposures of participants that developed heart disease, diabetes, and stroke before Visit 3, to reduce potential confounding from changes in diet that could arise from the diagnosis of these diseases.”

    Visit #3 was a follow-up questionnaire for all surviving participants, conducted between 1993–95.

    Some people mistakenly read this as “people who developed disease before visit #3 were removed from the study.” Among those who disagree with this study, this point was especially likely to be misunderstood, probably due to the unconscious power of bias. What the researchers did probably wasn’t quite that bad.

    How it should be read is this. At visit #1, they collected their initial data using FFQs. On visit 2, they collected additional data and averaged the data out in an attempt to improve accuracy.

    Finally, on visit 3, if the participants were diagnosed with any of those issues between then and visit #1, their data was frozen in time. It’s not explicitly noted what happened to that participant data afterwards. Was it thrown out completely? Their statement doesn’t seem to suggest that.

    So what likely happened to that data? They assumed that for the next couple of decades, those people stayed on the same diet that made them sick. They were locked in as whatever they were for visits 1 and 2. Let’s examine a few reasons why this may have skewed the data.

    Situation 1: A participant from Visit 1 and 2 is categorized as a high carb participant. They are then diagnosed with diabetes by visit 3, thus are locked into the high carb group and their data is frozen. They change their diet and lower their carbs (what would have made them low carb by the end of the study) and survive the study. Who gets the point? Team high carb (scored by someone who should have been team low carb).

    Situation 2: A participant in the low carb group (remember, that group tended to already have health issues) gets OFFICIALLY diagnosed with heart disease. Due to USDA recommendations and the downward spiral of agriculture moving towards corn and grains, the participant inadvertently eats more and more carbs over time. He would have been in the high carb group by the end. But due to the “lockout of updates,” he is considered low carb. He dies before the study is over. Who gets a negative point? Team low carb. Thus high carb is winning.

    …that is, of course, if you subscribe to the much more likely (and tested) idea that intentionally lowering carbohydrate intake actually improves your health…

    You could say the same thing the other way around if you believed that low carb actually does kill you.

  2. Eric Cameron says:

    So well written, once again. You’ve touched on all of the important discussion points Derek. Glad we’ve got you in the corner of the ketogenic diet to help people see the truth.
