Tuesday, October 28, 2008

Book Review: The Jungle Effect, by Daphne Miller, M.D.

Our local library recently began advertising a talk by Dr. Daphne Miller, author of The Jungle Effect. The essential concept of the book is to examine epidemiological "cold spots" for various modern diseases, such as diabetes, heart disease, depression, and cancer. These cold spots are areas notably low in the incidence of said diseases. Dr. Miller visited each area and studied the local cuisines, with the idea that food is a driving force behind the development of these diseases, which often show their highest incidence among those eating a modern Western diet.

I was quite excited when I first read about The Jungle Effect. One reviewer went so far as to dub Dr. Miller the modern equivalent of Weston Price. I'm a big fan of Price's work as a shining example of the application of the scientific method and what can be discovered with limited resources and a determined rational mind. I also believe cultural wisdom that has stood the test of time deserves to be weighed along with more "scientific" evidence, and of course I am in favor of whole, nutrient-dense foods (who isn't?). As I read The Jungle Effect, however, my enthusiasm waned and frustration set in. While I do believe that the foods put forth by Dr. Miller would constitute a much healthier diet than that eaten by most in modernized society, the scientific rationale falls well short, and I believe it leads to confusion and complication that is both unnecessary and unjustified.

Let's start with the good. First and foremost, Dr. Miller is an excellent writer, and clearly passionate about helping people improve their health. The Jungle Effect covers five different traditional cuisines associated with disease cold spots:
  • Copper Canyon, Mexico (the Tarahumara): Diabetes
  • Crete: Heart Disease
  • Iceland: Depression
  • Cameroon: Bowel Trouble
  • Okinawa: Breast and Prostate Cancer
Dr. Miller's descriptions of these cultures and their cuisines are vivid and fascinating. Many of the recipes sound very tasty, and despite the relatively high carbohydrate content (often from honey, sugar, or maple syrup) are probably considerably healthier than the average Western diet. Indeed, the patients to whom she "prescribed" these diets apparently saw positive results, e.g. in reducing blood sugar, blood pressure, weight, depression symptoms, etc. So there is clearly some upside here, at least compared with the diet and health of the average American.

But in the end, The Jungle Effect suffers from some fundamental flaws. My criticisms here are meant to be constructive. As noted in the last post, sharing information is important if we hope to understand and sort out differences in our beliefs. Dr. Miller's approach is "traditional" not only in searching out elements of indigenous cuisines, but also in adhering to the nutritional orthodoxy. She obviously received the usual medical training, and consulted a nutritionist while writing the book. But many of you reading this realize that widely-held beliefs about the health effects of various foods are constructed on evidence that is weak at best, with contradictory evidence often being ignored. This results in some clear cognitive dissonance. For instance, Dr. Miller briefly discusses the Inuit (Eskimos) and the high level of health maintained on their traditional diet consisting almost entirely of high-fat meat. Yet much of the rest of the book warns against the dangers of red meat and too much fat, particularly saturated fat. I suspect there's also some confirmation bias at work in her selection of which cultures to study. One wonders why she didn't visit the Inuit, Masai, or for that matter the Namgis tribe featured in the excellent documentary My Big Fat Diet. The latter case is particularly interesting, as the Namgis who returned to a traditional high-fat diet experienced rapid and major health improvements, considerably more dramatic than those described for Dr. Miller's patients. Evidence modifies beliefs, but our beliefs should not cause us to filter the evidence.

Dr. Miller does briefly attempt to explain away the apparent Inuit "paradox" by noting that the wild animals traditionally eaten themselves eat nutrient-dense food, and pass that nutrition along to predators. But then the best one could say is that it isn't meat per se that is unhealthy, just meat fed an unnatural diet (Dr. Miller generally recommends very low meat intake, particularly red meat, usually without distinguishing the source). Give her credit for singing the praises of nutrient-dense organ meats. She also gets some points for not invoking genetics as the root of the apparent paradox. If you ever feel the urge to do this, then you should undertake the following procedure:
  1. Find a friend with some heavy boots.
  2. Have them kick you in the backside.
  3. After each kick, say the following: "I will not blame the observed failure of my hypothesis on unobserved genetic factors."
  4. Repeat steps 2-3 until rationality sets in.
The Jungle Effect would have benefited from deeper critical thinking, rather than absolute reliance on consensus (see the Galileo quote at the top of the page). One aspect of this would be a broader sample of indigenous cuisines, including those high in animal products. It would also have been nice to see some integrative analysis of the different diets. Weston Price studied a wide variety of indigenous cultures, and was able to distill some common nutritional factors resulting in robust health, even going so far as to discover a new fat-soluble vitamin (which he called Activator X, now thought to be Vitamin K2). I would like to know, for example, how the health of the various cultures compared across the spectrum of diseases. Cameroon may have a low incidence of colon cancer, but how do their rates of diabetes, heart disease, etc. compare to those of the other cultures described in the book? For that matter, one would like to compare the traditional Inuit on that basis as well, to include the full spectrum of dietary macronutrient compositions.

Indeed, Dr. Miller notes that the Tarahumara, while notably free of diabetes, are not particularly healthy otherwise. Following Weston Price's cue, I looked for information on Tarahumara dental health, the idea being that dental health is a reflection of overall health, certainly of the status of vitamins involved in immune support and mineral metabolism. I'd be interested in getting Dr. Miller's view, since she was on the spot and got to observe the Tarahumara. I did find this article, which implies that the traditional Tarahumara suffer from significant dental disease, though it's pretty thin on details. We might also compare them visually with a hunter-gatherer. Here's a photo of an indigenous Tarahumaran, who according to Dr. Miller subsists largely on corn, beans, squash, and relatively little meat. Compare with a Kalapalo tribesman, who eats a lot of fish (they don't hunt animals) and jungle fruits and vegetables. Draw your own conclusions. Personally, I wish I had some Kalapalo biceps.

The nutritional context could also have been expanded in time. Modern indigenous diets do not necessarily represent the evolutionary diet of humans. Those sampled by Dr. Miller all relied heavily on agriculture, yet those foods available through agriculture have been part of the human diet for a very short time, evolutionarily speaking. The paleo-anthropological evidence certainly indicates that the pre-agricultural diet often relied heavily on meat from large mammals, and there are some clear markers of health decline at the agricultural boundary (decreased skeletal stature and skull size, evidence of mineral deficiencies, tooth decay). The modern indigenous diets likely evolved from other influences beyond evolution: extinction of large prey animals, geopolitical forces, changing climate, etc. Over time, these cultures may have identified the healthiest combinations of whatever foods were available, but that doesn't mean the available foods are the healthiest as defined by human evolutionary heritage. So again, it's necessary to consider all available evidence, not just that which agrees with our preconceptions.

Let's discuss some of the key nutritional beliefs that underpin Dr. Miller's arguments. Despite being widely held, a critical examination shows that the actual scientific evidence for many of these beliefs is thin at best, and often contradictory. Gary Taubes' Good Calories, Bad Calories (GCBC) gives a broad critical examination of the evidence, so there's no need to go into depth here (I suspect Dr. Miller has not read GCBC - if I can catch up with her at the talk this Sunday, I'll offer her a copy). Start with saturated fat. Obviously it is generally thought that saturated fat is a causal factor in many diseases, from diabetes to cancer to heart disease. Dr. Miller cites Ancel Keys' work as evidence for this, but the problems with Keys' studies are well documented in GCBC and in many other places. The biggest one, of course, is that epidemiological research such as the Seven Countries Study can only show statistical associations, not causation, and is susceptible to confounding from the large number of uncontrolled variables. Other epidemiological studies (like Framingham) have shown the opposite of Keys' conclusions, and to my knowledge there are few (if any) controlled studies that illustrate any causal connection between saturated fat and disease. Stephan is starting a new series at Whole Health Blog on this topic, which I recommend.

Apart from experimental evidence, one would also like to have a plausible mechanism by which saturated fat causes disease. As I discussed here, despite a half-century of research, nobody has any idea how it is that saturated fat leads to heart disease. "Experts" continue to pound on the health evils of saturated fat, despite evidence that is weak at best and generally contradictory. Dr. Miller calls out saturated fat as a pro-oxidant food "known to cause oxidative stress". Again, we see uncritical acceptance of the consensus. Saturated fats are LESS susceptible to oxidation than unsaturated fats, particularly polyunsaturated fats from vegetable oils, flax, etc. From simple chemistry, one expects polyunsaturates to induce substantially more oxidative stress (credit Dr. Miller with recommending limited vegetable oil intake, though I don't agree with her recommendations for taking flax oil). And of course the human body preferentially manufactures saturated fat from excess sugars, likely an evolutionary response. Were saturated fat to actually increase oxidative stress, one would have to hypothesize some mechanism by which the body preferentially oxidizes it, or by which saturated fat induces some other biological response that leads to oxidative stress. How or why an organism would evolve such responses escapes me, and I'm not aware that any have been experimentally or even theoretically identified.

Let's compare the utter lack of hypotheses tying saturated fat to heart disease with an alternative: atherosclerosis is at least partially caused by excess sugar and polyunsaturated fat. Here's the rationale (discussed in several places on the web):
  • Fats are transported in the blood in lipoproteins.
  • Lipoproteins tend to embed themselves at places where arteries sustain damage, such as branches (veins, under considerably lower pressure, do not exhibit atherosclerosis).
  • Macrophages have specific receptors for LDL that has been damaged by oxidation or glycation, but no receptor for undamaged LDL. Oxidized/glycated LDL is consumed by the macrophages, which can lead to accumulation of "foam cells" forming the atherosclerotic plaque.
  • Consumption of polyunsaturated fat promotes oxidation of LDL. Lipoproteins consist of a large protein coat, interspersed with phospholipids (two fatty acids on a phosphate backbone). If the phospholipid contains a polyunsaturated fatty acid, it is more susceptible to oxidation.
  • Increased blood sugar, either via consumption of large amounts of refined carbohydrates or due to metabolic dysfunction (e.g. insulin resistance), increases the potential for glycation damage of LDL, where the sugar binds to the protein and alters its structure in a manner similar to oxidation.
  • So, PUFA + sugar = damaged LDL = inflammatory response = atherosclerosis.
Now, I'm not saying this is the answer, but it at least makes logical sense, follows currently accepted beliefs about chemistry and human biology, and I would guess explains the epidemiological evidence at least as well as the saturated fat hypothesis (which never was particularly clear to start with). It's certainly better than "we have no idea", as put forth in textbooks and by "experts". More research is needed to really nail things down, but I think we can see that the PUFA/sugar hypothesis, which has some basis in accepted science, should receive a greater weight than the saturated fat hypothesis, which has no basis whatsoever.

Another issue is dietary fiber. Dr. Miller is a big fan for the usual reasons: fiber makes you feel full, scrapes the inside of your colon, etc. The satiety argument requires a very narrow view of appetite regulation, as I discuss here and here. It is true that the mechanical distension of the stomach contributes to satiety, probably both via nervous system signals and suppression of ghrelin secretion. But those are only two of the many nervous and hormonal signals indicating the macronutrient and energy content of food, energy availability in the body, etc. Fiber has no effect on these other aspects. That's to be expected - otherwise we'd be able to eat only highly fibrous food with little energy content, feel full, and wind up starving to death. Not a very good evolutionary strategy.

Dr. Miller discusses the origins of the fiber hypothesis, from Denis Burkitt's work in Africa. Taubes gives a more detailed history in GCBC. It is interesting to note that Burkitt's hypothesis originated from Peter Cleave's "saccharine-disease hypothesis", namely that refined carbohydrates were at the root of a host of modern diseases. Burkitt was initially impressed with Cleave's work, noting that Cleave possessed "perceptive genius, persuasive argument and irrefutable logic." But over time he modified the argument to accent the absence of fiber rather than the presence of refined carbohydrates. Now, there's nothing wrong with this hypothesis per se, but one must be aware that testing refined vs. unrefined carbohydrate in the diet does not distinguish between these hypotheses: unrefined carbohydrate has more fiber. And there is other evidence that absence of fiber is not the health issue it's made out to be. Says Taubes:

Burkitt and Trowell called their fiber hypothesis a "major modification" of Cleave's ideas, but they never actually addressed the reasons why Cleave had identified refined carbohydrates as the problem to begin with: How to explain the absence of these chronic diseases in cultures whose traditional diets contained predominantly fat and protein and little or no plant foods and thus little or no fiber - the Masai and the Samburu, the Native Americans of the Great Plains, the Inuit? And why did chronic diseases begin appearing in these populations only with the availability of Western diets, if they weren't eating copious fiber prior to this nutrition transition? Trowell did suggest, as Keys had, that the experience of these populations might be irrelevant to the rest of the world. "Special ethnic groups like the Eskimos," he wrote, "adapted many millennia ago to special diets, which in other groups, not adapted to these diets, might induce disease." Trowell spent three decades in Kenya and Uganda administering to the Masai and other nomadic tribes, Burkitt had spent two decades there, and yet that was the extent of the discussion.

Sounds like Keys, Burkitt, and Trowell could all use the boot treatment I described above. Taubes' discussion highlights another related hypothesis, namely that red meat is bad, for which the argument at least partially stems from the fiber hypothesis. We can't distinguish between the unrefined carbohydrate and fiber hypotheses by exchanging refined for unrefined carbohydrates, but we can distinguish between the red meat and fiber hypotheses by exchanging red meat for whole fruits, vegetables, and grains. Taubes addresses this as well:

By the end of the 1990s, clinical trials and large-scale prospective studies had demonstrated that the dietary fat and fiber hypotheses of cancer were almost assuredly wrong, and similar investigations had repeatedly failed to confirm that red meat played any role* (*Those clinical trials that tested the dietary-fat-and-fiber hypotheses of cancer, as we discussed earlier, replaced red meat in the experimental diets with fruits, vegetables, and whole grains. When these trials failed to confirm that fat causes breast cancer, or that fiber prevents colon cancer, they also failed to confirm the hypothesis that red-meat consumption plays a role in either.) Meanwhile, cancer researchers had failed to identify any diet-related carcinogens or mutagens that could account for any of the major cancers. But cancer epidemiologists made little attempt to derive alternative explanations for those 10 to 70 percent of diet-induced cancers, other than to suggest that overnutrition, physical inactivity, and obesity perhaps played a role.

Long story short: when scientists looked specifically for a causal link between fiber and cancer prevention or red meat and cancer causation, they found diddly-squat. Since these hypotheses were originally generated by weak epidemiological evidence, the contradictory evidence from more controlled trials weakens the hypotheses further. The refined carbohydrate hypothesis, on the other hand, provides considerable explanatory power and consistency with known biological properties of cancer, such as the necessity for cancer growth to be driven by insulin and a ready supply of glucose. The refined carb hypothesis also explains all of Dr. Miller's epidemiological observations. Given the above, it certainly seems more likely that, for instance, the absence of colon cancer in Cameroon has less to do with the presence of fiber than the absence of refined carbohydrates.

Finally, Dr. Miller appears to be significantly misinformed as to what is considered a "low carbohydrate diet". She generally uses the term "high protein diet", which underscores the root of the misunderstanding. The term "high protein diet" presumably indicates that it is low in both carbohydrates AND fat, which is problematic to health. One of my favorite books (which The Jungle Effect has inspired me to re-read) is Marvin Harris' Good to Eat. Harris notes that indigenous cultures never get the bulk of their calories from lean protein. Energy is invariably provided by fat and/or carbohydrate, with the amount of protein being remarkably constant across cultures. Dr. Miller correctly notes the underlying reason for this: protein is "dirty" fuel. Unlike sugar or fat, protein contains nitrogen, so converting it to energy essentially produces pollution in the form of nitrogenous waste, which our kidneys then need to filter. A high-protein diet can overload the body's ability to dispose of these toxins, leading to sickness and ultimately death, even though plenty of food is provided. Dr. Miller relates the experiences of some of her patients on high protein diets: they basically felt unsatisfied and craved carbohydrates. This matches nicely with the phenomenon of rabbit starvation, where pioneers would feast on extremely lean rabbit meat, only to still be hungry. After some time they would eat 3 to 4 pounds of rabbit at a sitting, yet ultimately would waste away and essentially starve to death with full stomachs. It is no surprise that a high protein diet (as defined by restriction of both carbohydrate and fat) is doomed to failure.

But "low carbohydrate" does not necessarily imply "high protein". Indeed, had Dr. Miller read any of the large number of books on low carbohydrate diets, Googled the topic, or consulted with any number of experts, she would have found the recommendations are generally for high fat. This is at odds with the idea that fat is unhealthy, of course, so it may not be surprising that those adhering to current nutritional dogma would infer that any healthy diet must be low fat, so lowering carbohydrate leaves only raising protein. As discussed in the last post, our beliefs are always conditioned on other beliefs, and obviously placing undue weight on some supporting hypothesis leads to poor inferences. Even a moment's consideration of the Inuit diet, for example, would indicate the true nature of a healthy low carbohydrate diet.

Anyway, I could spend many more paragraphs discussing The Jungle Effect (I made a ton of notes while reading - something that the Kindle, for all its flaws, is good for). But I think you get the point. Hopefully Dr. Miller takes the time to read this review in the intended spirit, which is not to bash her work. I think some of what she espouses has value, and I love the idea of studying indigenous diets, provided it's done in an appropriately broad context. But the conclusions one draws from this study need to be consistent with all of the actual relevant scientific evidence, not just the arbitrary socially-driven beliefs that form "consensus". Otherwise you risk coming to unjustifiable and/or inconsistent conclusions and sub-optimal recommendations, forcing the addition of ad hoc hypotheses, artificial dietary rules, etc. As I've said, changing lifestyles is hard enough without a lot of extra rules to follow. A healthy diet should be easy. Making it hard and motivating it with inconsistent rationales just reduces the chances that people will actually make the change and improve their health.

Monday, October 6, 2008

Information, Knowledge, and Wisdom

There are three kinds of lies: lies, damned lies, and statistics.
Benjamin Disraeli, British politician (1804 - 1881)

Statistics: The only science that enables different experts using the same figures to draw different conclusions.
Evan Esar, Esar's Comic Dictionary, American Humorist (1899 - 1995)

Where is the knowledge that is lost in information? Where is the wisdom that is lost in knowledge?
T.S. Eliot (1888 - 1965)

As discussed in the last post, we are today faced with a dizzying array of contradictory "recommendations" when it comes to health and lifestyle, particularly what to eat. How is it possible that all of these experts come to such differing conclusions? The quotes above illustrate certain root aspects of the problem. First, there is a gross misunderstanding of what "statistics" is really meant to accomplish, how it is properly applied, and what the answer "means". Second, and at least partly because of the first issue, the vast quantities of hard information ("facts" or "data") we have available tend to get filtered and twisted into erroneous conclusions, often supporting goals (e.g. "sell more books") other than maximizing the health of the population. Finally, despite the apparent rigor, technology, and expertise applied to answering key scientific questions, the process of actually turning scientific results into useful decisions is generally an exercise in muddled thinking rather than rational inference.

The first two quotes embody general perceptions about "statistics". Those holding these views, by the way, include most professional scientists. When I worked as a research scientist in gamma-ray astrophysics, I can't tell you how many times colleagues would say things like "You can get any answer you want using statistics". Of course, they were quite happy to (supposedly) get the answers they wanted via application of statistical methods, but that irony isn't the point, because the incontrovertible truth is precisely the opposite: there's only one right answer. Esar's quote embodies this attitude, so let's pick it apart. First, statistics is not a "science". Science involves the broader exercise of observing, modeling using mathematics, and interpreting observations in terms of those models. Statistics is just math, and as in any well-posed mathematical problem, there's only one right answer. I've thought of only three ways in which two scientists could come up with different answers to the same statistical problem:
  1. Somebody made a math error (happens more often than you think, see Point 3).
  2. They used different input information (in which case it's not really the same problem in both cases).
  3. They used different approximate methods (often badly) to solve the problem.
You may remember learning statistics in school. Most people hate it, because by and large, it doesn't really make any sense. "Statistics" as it is usually thought of consists of a big cookbook of recipes for diddling around with "data" and turning them into some small set of numbers called "statistics", like mean and variance. But there are no core concepts tying the recipes together, just lots of rules like "if the data looks like this and you want to ask this question, then use such-and-such technique". What you may not remember (or even have been taught) is that "such-and-such technique" is usually not an exact mathematical result, but only a reasonable approximation when the "if" part of the rule is close to being true. This part is forgotten far too often by people who should know better. For instance, many statistical tests only become exact in the limit that the number of degrees of freedom (# of data points minus # of model parameters) goes to infinity. If such tests are to be good approximations, you need lots of degrees of freedom, but I've seen professional scientists in peer-reviewed publications blithely apply tests valid only in the infinite limit to a model with three parameters and data containing only five points. Last time I checked, "5-3=2", and "2" is not a particularly good approximation of "infinity".
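To make this concrete, here's a quick sketch (the scenario and numbers are my own, purely for illustration) comparing an exact binomial tail probability against the kind of normal, large-sample approximation baked into many cookbook tests:

```python
from math import comb, erf, sqrt

def exact_tail(k, n, p=0.5):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def normal_tail(k, n, p=0.5):
    """Normal (large-sample) approximation to the same tail probability."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    z = (k - mu) / sigma
    return 0.5 * (1 - erf(z / sqrt(2)))

# 4-or-more heads in 5 fair flips: exact 0.1875, approximation roughly 0.09,
# off by about a factor of two -- 5 is not a good approximation of infinity.
print(exact_tail(4, 5), normal_tail(4, 5))

# 270-or-more heads in 500 flips: the two now agree within about 10%.
print(exact_tail(270, 500), normal_tail(270, 500))
```

The same caution applies to any recipe whose "if" clause quietly assumes large samples: the recipe doesn't refuse to run when you hand it five data points, it just silently gives you a bad number.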

But the problem runs even deeper, because even when scientists do the math right and apply the recipes under the appropriate conditions, more often than not they're still not answering the right question. Pure science attempts to answer questions like "does eating lollipops induce insulin resistance?". Applied science wants to actually make decisions, e.g. "should I eat this lollipop, given what I know about its effects on insulin resistance, the effects of insulin resistance on health, etc.?" Almost always, the answers given in scientific papers are of the form "We compute a 95% probability that we would have observed these data if lollipops cause chronic insulin resistance". That's a much different statement than "We compute a 95% chance that lollipops chronically increase insulin resistance given the data we observe and other prior information," and only the latter statement is of any use on the applied end of things, when deciding whether or not to eat lollipops.

A very simple example might help to illustrate the issues. Suppose somebody wants you to gamble on a coin flip: you bet $1, and if the coin comes up heads, you win your dollar plus another $2. If the coin comes up tails, you lose your $1. How do you decide whether or not to play this game? Right from the start, we can see standard statistics is going to have trouble, because you have no data. Intuitively you might guess that there is a 50% chance of heads, and indeed if you had no other information at all, you would be right. With two possibilities, and no information to distinguish which would be more likely, you would assign equal probabilities to both outcomes. Our decision to play or not comes down to how much money we'd expect to have in each case. If we don't play, we have a guaranteed $1 in our pocket. If we do play, then with no other information there's a 50% chance we'll have $3, and a 50% chance we'll have zero, so $3*0.5 + $0*0.5 = $1.50. On average, we have $1.50 if we play and $1 if we don't, so we should play, given that we lack any other information about the game.
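The expected-value arithmetic above is simple enough to write out in a few lines (a toy sketch of the reasoning, nothing more):

```python
# The coin-flip game: bet $1; heads returns your dollar plus $2, tails loses it.
p_heads = 0.5   # with no other information, both outcomes are equally likely

ev_play = p_heads * 3.0 + (1 - p_heads) * 0.0   # $3 on heads, $0 on tails
ev_keep = 1.0                                    # guaranteed dollar if we decline

# ev_play = 1.5 > ev_keep = 1.0, so given only this information, we play.
print(ev_play, ev_keep)
```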

But what if our information were different? For example, suppose somebody we trust tells us that she knows the coin-flip guy is a scam artist, having been arrested several times. Now what do you do? Most people would now intuitively keep their money, and indeed a mathematical analysis would likely indicate that this is the proper course of action, as on average you would now expect to lose more often than not. But notice that the only thing that has changed is our knowledge about the game: the coin being flipped is the same, as is the person doing the flipping, and presumably the laws of physics governing coin flips. Purposely ignoring this new information would be fantastically stupid, particularly given that it came from a trusted source. Note that we still have no "data" in the traditional sense.

What if we had some data? Suppose the person we're playing against offers to let us flip the coin one hundred times before betting, and 99 out of 100 come up heads. Now you have some more information about the coin. Assuming that you're not doing something to introduce a bias, you have some additional confidence that the coin itself is biased towards heads, even more inducement to play the game, because the likelihood that you would have observed 99 heads out of 100 flips would be low for a fair coin. But that data and the associated likelihood are only part of the picture: do you now ignore the input of your trusted friend? Does your data trump the information they provided? Obviously it cannot. The aforementioned likelihood of seeing 99 of 100 flips come up heads was calculated with the a priori assumption that the game was fair. But your friend's input tells you that fairness is unlikely, and given your other information about scam artists, the likelihood that you saw 99 heads when you were flipping the coin and no money was at stake should be considerably higher. And the likelihood of 99 heads answers the wrong question anyway. Our decision to play or not must be based on the probability that the coin, when flipped by our opponent, will come up heads, not on the probability that you would have flipped 99 heads in 100 tries assuming the coin was fair.
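The likelihood in question is easy to compute. Here's a sketch comparing the chance of 99 heads in 100 flips under a fair coin versus a hypothetical coin rigged to land heads 99% of the time (the 99% figure is my invention, just to have a concrete alternative):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k heads in n flips when P(heads) = p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

fair = binomial_pmf(99, 100, 0.5)     # about 8e-29: astronomically unlikely
rigged = binomial_pmf(99, 100, 0.99)  # about 0.37: entirely unremarkable
print(fair, rigged)
```

The data all but rule out the fair coin, but as the paragraph above argues, that likelihood by itself still answers the wrong question when it comes time to bet.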

The key point here is that information is information is information. The data gathered in a particular experiment is just more information which can be used to update our beliefs in different hypotheses. But most experiments don't start from zero, where the gathered data is the only available information. Usually others have conducted experiments that gathered other data. There's generally other relevant information as well. In the lollipop/insulin resistance example, we know that the glucose in the lollipop raises insulin, and that the fructose may at least temporarily contribute to insulin resistance. Any reasonable analysis must include this additional information when evaluating our belief in the hypothesis under test ("lollipops induce chronic insulin resistance"). Ignoring this information is no different than arbitrarily excluding data from our analysis (after all, data is just a kind of information); yet this is precisely how most scientific results are presented and interpreted.

Have a headache yet? I know, I know, this is some tough material. The whole area of reasoning under uncertainty is mathematically and philosophically deep. This is about the fourth time I've tried writing a reasonably accessible post, and have concluded that it's fundamentally hard to talk about. But if you're going to make good decisions about your health, it's good to have at least some idea where the flaws are in most scientific analyses, and also how to think about the issues in the "right way" so as not to be misled. So let's take a moment to review the key lessons you should take away at this point:
  1. Though I didn't explicitly say it above, the notion of a probability really reflects the degree of belief in some statement (e.g. "the next coin flip will be heads"). Probabilities are real numbers between 0 and 1 (or 0% and 100%), where 0 represents absolute belief that the statement is false, and 1 absolute belief that it is true.
  2. We don't necessarily need "data" to assess the probability that a statement is true, any type of information will do. There's nothing special about data, they're just more information to be used in updating degrees of belief (probabilities).
  3. To properly assess the probability of a hypothesis, we must include not just the data, but also any other relevant background information.
  4. If the outcome of a decision you're making depends on a hypothesis being true, then you need to know the probability of that hypothesis being true given all of the relevant available information. The probability that some particular data would have been observed assuming the hypothesis to be true necessarily ignores relevant information, precisely because it assumes the truth of the hypothesis without accounting for the possibility that the hypothesis is false. This is hopelessly circular: you can't assess the degree of belief in a hypothesis if your analysis uniformly assumes it to be true.
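The gap between these two probabilities in lesson 4 is easy to see with a quick calculation. The numbers below (a diagnostic test for a rare condition) are invented purely for illustration; the point is that the probability of the data given the hypothesis can be 0.99 while the probability of the hypothesis given the data stays small:

```python
# Illustrative numbers only: a test that detects a condition 99% of the
# time, with a 5% false-positive rate, for a condition with 1% prevalence.
p_pos_given_sick = 0.99   # p(data | hypothesis)
p_pos_given_well = 0.05   # false-positive rate
p_sick = 0.01             # prior probability of the condition

# Overall probability of a positive result, weighing both hypotheses
p_pos = p_pos_given_sick * p_sick + p_pos_given_well * (1 - p_sick)

# The quantity we actually need: p(hypothesis | data)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(f"p(sick | positive test) = {p_sick_given_pos:.3f}")
```

Even with a 99% likelihood, a positive test here leaves only about a one-in-six probability that the condition is present, because the background information (rarity) was part of the picture.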
Hopefully you can start to see why nutritional science and the associated recommendations are all over the place. In practice, scientists very often
  • Selectively ignore prior information;
  • Misapply statistical approximations to calculate the wrong number (probability of data assuming hypothesis is true);
  • Interpret their results as indicating absolute truth or falsehood;
  • Perform this interpretation via vague mental gymnastics rather than rigorous mathematics.
This situation is all the more vexing in the Information Age, where everyone basically has access to the same set of information. Thus all scientists can quickly call up published papers, experimental databases, etc. The input information is effectively the same for everybody, yet the output conclusions are nearly as numerous as the scientists themselves. But looking at the four bullets above, we see ample opportunity to create divergent conclusions and recommendations with no hope of reconciling them. The last two are particularly troubling in the context of nutrition, because there's no hope of making decisions about what to eat when conflicting hypotheses are presented in terms of absolute truth/falsehood, with little visibility into the actual mental manipulations that go into making that assessment. Lesson 4 above tells us that we need a probability of truth in order to make a decision, because we have to evaluate the expected outcome accounting for the possibility that the relevant hypothesis(es) may be false. Go back to the coin-flip game. Suppose we were forced to assert with absolute certainty whether the next flip would be heads. We get a different decision for "true" than for "false"; but we don't actually know what the outcome of the flip will be, and there can only be one right decision in terms of maximizing our expected winnings given our information about the uncertain outcome of the coin flip.
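The expected-winnings calculation itself is simple. The stakes below (win $1 on heads, lose $1 on tails) are assumed for illustration; only the structure of the calculation matters:

```python
# Expected winnings per game, given a probability of heads.
# Stakes are hypothetical: +$1 on heads, -$1 on tails.
def expected_winnings(p_heads, win=1.0, lose=-1.0):
    return p_heads * win + (1 - p_heads) * lose

# The decision hinges on the probability of heads, not on a forced
# true/false assertion about the next flip.
for p in (0.25, 0.5, 0.99):
    ev = expected_winnings(p)
    decision = "play" if ev > 0 else "pass"
    print(f"p(heads) = {p:.2f} -> expected winnings {ev:+.2f} -> {decision}")
```

A fair coin gives zero expected winnings; any belief in a heads bias tips the decision toward playing, with no need to pretend certainty about the next flip.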

Now it may sound as if the picture is bleak for science, but rather amazingly, science seems to eventually bumble around to the correct conclusions. It's just a highly inefficient process because of the issues above. Scientists tend to hold on to certain "widely believed" hypotheses like grim death regardless of the actual evidential support; but eventually there comes a point when evidence for an alternative hypothesis becomes so overwhelming it becomes impossible to ignore (if you're paying attention, you can watch this process at work right now for low-carb diets). Science would benefit greatly, of course, by adopting a more rigorous analytical approach addressing the issues above. Such an approach exists, generally denoted "Bayesian Statistics". I don't like this term, since the methodology doesn't really focus on "statistics" per se (rather on probabilities), and its namesake, the Rev. Thomas Bayes, really made only a tangential contribution to the whole business. "Probability Theory" is a more apt term, reflecting the idea that the framework extends logic to the case where we're not 100% sure of the truth or falsehood of statements.

At the end of the post, I'll briefly discuss Probability Theory further and give a few references for those who are interested in the technical details. But for those just trying to puzzle through the maze of information presented by the media, doctors, etc. we can borrow some of the ideas from Probability Theory, putting together a way of thinking about evidence (information), hypotheses (knowledge), and decisions (wisdom). The T. S. Eliot quote at the top describes the situation we wish to avoid, one that many people experience now, struggling to make wise choices when faced with an avalanche of information and knowledge from different sources.

So let's see how we can apply the four lessons above in everyday thinking.
  1. Probabilities are just numbers representing degrees of belief. I'm not suggesting you carry a bunch of numbers around in your head to track your beliefs, but do recognize that most ideas are neither absolutely true nor absolutely false. We intuitively recognize that such absolutism is pathological, as seen in the often bizarre irrationality exhibited by dogmatists, who refuse to move from a position regardless of the weight of the evidence against that position. Probability Theory encapsulates that behavior mathematically. When new evidence is introduced, Probability Theory gives a formula for updating your beliefs (see math below), basically multiplying your current probability by the weight of that new evidence. But zero times anything is zero, i.e. if you were absolutely sure your current idea was right and all others were wrong, no amount of evidence would ever change your probability. So make sure you are always flexible in reassessing your beliefs. Mental discipline is required. The brain's natural tendency is to seek absolutes, as exhibited by the phenomenon of cognitive dissonance. Learn to be comfortable with uncertainty. Decisions can still be made in the absence of certainty; as Herodotus said, "A decision was wise, even though it led to disastrous consequences, if the evidence at hand indicated it was the best one to make; and a decision was foolish, even though it led to the happiest possible consequences, if it was unreasonable to expect those consequences."
  2. Just because you're not a scientist (or even if you are) and don't have detailed access to scientific data, it does not mean you can't weigh evidence and update your beliefs. Information is information is information, whether it's numbers or a brief newspaper story. The trick is in getting the weight in the right ballpark. A good rule of thumb: individual reports or results generally should not sway your belief very much. Strong belief is usually built on multiple independent results from different sources.
  3. Be sure to include all of the information you have available. Another manifestation of cognitive dissonance: when presented with evidence contradicting a strong belief, we give it zero weight. That's a mistake. Contradictory evidence should lessen your belief at least a little, like it or not. Do include evidence from all sources, including anecdotal and personal experience. Just be careful not to overweight that evidence. Be aware that truth is usually conditional. Take the following hypothesis: "You can't become obese on a zero-carb diet." The truth of that hypothesis is conditional on other hypotheses, e.g. "Insulin is the hormone governing fat storage" and "Insulin is primarily driven by carbohydrate consumption". Changes in our belief in these supporting hypotheses necessarily change our belief in the main hypothesis; for example, knowledge of the ASP pathway for fat storage changes our belief that insulin runs the show, and hence modifies our belief that zero-carb diets make you immune to obesity.
  4. We saw at the beginning that there are only three ways that scientists should disagree when assessing hypotheses. Adapting that to mental inference, disagreement implies that one or both people are irrational and/or have different information. Don't waste your time arguing with irrational people. Anybody who says things like "we'll have to agree to disagree" is irrational, because they have no information supporting their position and/or are unwilling to accept information that may modify their beliefs. But if you find yourself in disagreement with someone who seems rational, then engage in discussion to share the differing information that is at the root of your disagreement. You may not come to agreement - it's difficult to extract all relevant information and knowledge from somebody's head - but you at least will likely learn something new.
  5. Decisions require not only the quantification of information as probabilities (or at least some qualitative mental equivalent), but also a clearly defined goal. The goal in our coin-flip game was straightforward: on average, maximize the amount of money in your pocket. It's not so easy to quantify the goal of maximizing health. People try, which is why doctors love to measure things like cholesterol and blood sugar, but such metrics can only provide a narrow view of one particular aspect of overall health (and even if they didn't, treatment decisions are generally not properly analyzed anyway). Treatment decisions often involve modification of one or a small set of such numbers, which is incredibly myopic as it ignores overall health (hence the spectacular failure of "intensive therapy" to lower blood sugar in Type II diabetics by pumping them full of insulin). Remember also to include the potential long-term effects of your decisions, e.g. cranking up the insulin of those Type II diabetics lowers blood sugar in the short term, but increases the probability of early death, which presumably outweighs the short-term benefits.
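Point 1's warning about absolute certainty can be made concrete. Here's a minimal sketch of belief updating; normalizing against a single alternative hypothesis is a simplification I've assumed for illustration:

```python
# New belief is the old belief times the weight of the new evidence,
# normalized against a single alternative hypothesis (a simplification
# assumed here; evidence_weight is how much more probable the evidence
# is under the hypothesis than under the alternative).
def update(prior, evidence_weight):
    favored = prior * evidence_weight
    alternative = 1 - prior
    return favored / (favored + alternative)

print(update(0.5, 10.0))  # open mind: strong evidence shifts belief substantially
print(update(0.0, 10.0))  # dogmatist: a prior of exactly zero never moves
```

Zero times anything is zero: no weight of evidence, however large, budges a prior of exactly 0 (or 1), which is the mathematical face of dogmatism.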
The above is sort of a loose mental approximation to Probability Theory and Decision Theory. Those doing actual scientific research should be doing inference within the rigorous mathematical framework. I want to briefly discuss this, and I'll provide a few references as well. A full discussion of the subject would (and does) fill one or more large books; yet the conceptual basis is fairly simple, so I'll focus on that. The important thing to remember here is this: it's just math. There are a small number of concepts that we must accept axiomatically, and everything else follows mathematically. Arguments against the use of Probability Theory for scientific inference must necessarily focus on the fundamental concepts, because everything else is a mathematically rigorous result following from those concepts, listed below:
  1. Degrees of belief (probabilities) are represented by real numbers.
  2. Qualitative correspondence with common sense, e.g. if your belief in some background information increases (e.g. "Coin-flip guy is cheating") then so should your belief in a hypothesis conditioned on that information ("I will lose the coin flip when coin-flip guy does the flipping").
  3. The procedure for assessing degrees of belief (probabilities) must be consistent, where consistency can be described in three ways:
    1. If a conclusion can be reasoned out in more than one way, then all ways must lead to the same answer.
    2. Conclusions must be reached using all of the available evidence.
    3. Equivalent states of knowledge lead to the same probabilities based on that knowledge.
That's it. The whole of Probability Theory follows from these ideas, which seem to form a sensible and complete set of principles for scientific inference.

To make use of Probability Theory, we need some mathematical rules for manipulating the probabilities of different propositions. A little notation first: let A|C mean "A is true given that C is true". AB|C means "A and B are true given C", while A+B|C means "A or B is true given C". Let ~A|C mean "A is false given C". If p(A|C) denotes the probability that A is true given C, then we have the following product and sum rules:
  • p(AB|C) = p(A|C) p(B|AC) = p(B|C) p(A|BC)
  • p(A + B|C) = p(A|C) + p(B|C) - p(AB|C)
From the product rule, it follows that absolute certainty of truth must be represented by the value 1, since 1 times 1 equals 1. Further, since A and ~A cannot be simultaneously true, we wind up with 0 representing absolute certainty of falsehood, e.g. if p(A|C) = 1, then p(~A|C) = 0. It can be proven that these rules are uniquely determined, assuming probabilities are represented by real numbers (Concept 1) and structural consistency (Concept 3.1).
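The two rules can be checked by brute-force counting on a simple example. Here C is "a fair six-sided die is rolled once", and the events A and B are my own illustrative choices:

```python
from fractions import Fraction

# Brute-force check of the product and sum rules on one roll of a fair
# six-sided die (C). A = "result is 3 or less", B = "result is even".
outcomes = range(1, 7)

def prob(event):
    return Fraction(sum(1 for x in outcomes if event(x)), len(outcomes))

def A(x): return x <= 3
def B(x): return x % 2 == 0

p_A = prob(A)                              # 1/2
p_B = prob(B)                              # 1/2
p_AB = prob(lambda x: A(x) and B(x))       # only {2}: 1/6
p_A_or_B = prob(lambda x: A(x) or B(x))    # {1,2,3,4,6}: 5/6

# p(B|AC): probability of B among the outcomes where A holds
p_B_given_A = Fraction(sum(1 for x in outcomes if A(x) and B(x)),
                       sum(1 for x in outcomes if A(x)))

assert p_AB == p_A * p_B_given_A           # product rule
assert p_A_or_B == p_A + p_B - p_AB        # sum rule
print(p_AB, p_A_or_B)
```

Exact fractions make the check airtight: 1/2 × 1/3 = 1/6 for the product rule, and 1/2 + 1/2 - 1/6 = 5/6 for the sum rule, matching direct counting.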

That's most of Probability Theory, IMHO far more conceptually elegant and mathematically simple than the mess of statistics most of us were taught. That's not to say that actually solving problems is necessarily easy, but with a sound conceptual basis and simple rules, it's a lot easier to solve them consistently, get numbers that actually make sense, and combine different scientific results to understand their impact on various hypotheses.

This last point is important. We discussed earlier how new information ("data") must be used to update our beliefs. We shouldn't look at two different scientific results and try to pick between them. Rather our belief in a hypothesis derived from the first result must be adjusted when we get the second result. Try figuring out how to do this using standard statistics. The Probability Theory recipe for this follows trivially from the product rule. Let's rewrite the second equality in the product rule as follows:
  • p(H|DI) p(D|I) = p(D|HI) p(H|I)
where "H" represents a hypothesis being tested, "D" some observed data, and "I" our prior information (which might include results of other experiments, knowledge of chemistry, etc.) Each term represents a different proposition:
  • p(H|DI) : The probability that our hypothesis is true, given the observed data AND background information. This is called the posterior, and is the key quantity for scientific inference and decision-making.
  • p(D|I) : The probability that we would have observed the data given only the background information, regardless of whether the hypothesis is true or false, called the evidence.
  • p(D|HI) : The probability that we would have observed the data given both the hypothesis AND the background information, called the likelihood.
  • p(H|I) : The probability that the hypothesis is true given only the background information, denoted the prior.
With a single algebraic step we can solve for the posterior, which is the quantity of interest:
  • p(H|DI) = p(D|HI) p(H|I) / p(D|I)
So we now have a very simple recipe for updating our beliefs given new data. I find it intuitively nice and tidy: to get your new belief from the old, multiply by the ratio of the likelihood (the probability of measuring the data given the hypothesis and other information) to the evidence (the probability that you would have seen the data in any case). If your hypothesis increases the probability of obtaining that dataset, then your belief in that hypothesis increases accordingly, and vice versa. If your hypothesis tells you nothing about the data, then the ratio is 1, and your belief does not change.
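As a concrete sketch, here's the recipe applied to the coin from the opening example. The bias value (0.95) and the 50/50 prior are assumptions for illustration; with only two competing hypotheses, the evidence p(D|I) is just the prior-weighted sum of the two likelihoods:

```python
from math import comb

# Two competing hypotheses about the coin:
# H  = "heavily biased towards heads" (p(heads) = 0.95, assumed)
# ~H = "fair" (p(heads) = 0.5)
def binom(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 99, 100       # observed data D: 99 heads in 100 flips
prior_H = 0.5        # p(H|I), assumed: the friend's warning makes bias plausible

like_H = binom(k, n, 0.95)     # p(D|HI), the likelihood
like_fair = binom(k, n, 0.5)   # p(D|~H I)

# Evidence p(D|I): prior-weighted sum over both hypotheses
evidence = like_H * prior_H + like_fair * (1 - prior_H)

posterior_H = like_H * prior_H / evidence              # p(H|DI)
posterior_fair = like_fair * (1 - prior_H) / evidence  # p(~H|DI)
print(f"p(biased | 99 heads) = {posterior_H}")
print(f"p(fair   | 99 heads) = {posterior_fair:.2e}")  # vanishingly small
```

After 99 heads, essentially all of the probability mass sits on the biased-coin hypothesis; the fair-coin posterior survives only at an astronomically small level.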

This formula goes by the name of Bayes' Theorem, so named for the Rev. Thomas Bayes who originally derived a form published in a posthumous paper in 1763. The version shown above was actually published by Laplace in 1774, so we see these ideas have been around for a while. The power of Bayes' Theorem is hopefully clear: given some prior probability, i.e. our degree of belief in a hypothesis, we know how to update the probability when new data is observed, independent of how we arrived at our prior probability. So no matter what experiments I did (or even if no experiments have been done) to arrive at p(H|I), I can simply update that belief given my new data. Note that the term usually reported in scientific results is the likelihood, which is only part of the story.

If you've ever looked at a "meta-analysis", where somebody tries to combine results from many different experiments, you may have noted that it involves a lot of statistical pain, and often includes cutting out some results (e.g. favoring clinical over epidemiological studies), which violates the whole idea of using all available information. This sort of combination would be straightforward using Probability Theory, presuming all of the original results to be combined were also derived with Probability Theory. No reason to leave out some of the results due to "lack of control". A proper Probability Theory treatment would, for example, quantitatively account for the large number of "uncontrolled variables" (which really implies a lack of information connecting cause and effect) in a population study and adjust the probabilities accordingly.
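Here's a sketch of that kind of combination (the coin hypotheses and flip counts are hypothetical). Updating on one dataset and then feeding the posterior in as the prior for the next gives the same answer as analyzing the pooled data all at once:

```python
from math import comb

# Hypothetical comparison: H = "p(heads) = 0.8" vs ~H = "p(heads) = 0.5",
# starting from an assumed 50/50 prior.
def binom(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def update(prior, k, n, p_H=0.8, p_alt=0.5):
    like_H, like_alt = binom(k, n, p_H), binom(k, n, p_alt)
    return like_H * prior / (like_H * prior + like_alt * (1 - prior))

# Sequential: "experiment 1" sees 7 heads in 10 flips; its posterior
# becomes the prior for "experiment 2", which sees 8 heads in 10 flips.
sequential = update(update(0.5, 7, 10), 8, 10)

# Batch: analyze the pooled data, 15 heads in 20 flips, in one shot.
batch = update(0.5, 15, 20)

print(round(sequential, 6), round(batch, 6))  # identical (up to float rounding)
```

That's the whole "meta-analysis" in two lines: results combine by chaining posteriors, with no need to discard any of the experiments.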

Now, the few of you who have actually made it this far may be wondering why, if Probability Theory is so much better than standard statistics, it is not widely applied. As with many such situations in science, the answer is complicated, and at least partly tied up with human psychology and sociology. You can read about it more in the references, but I'll hit a few high points. It is interesting to note that Probability Theory was accepted and used prior to the mid-19th century or so. Laplace, for example, used it to estimate the mass of Saturn with considerable accuracy, so much so that an additional 150 years of data improved the result by only 0.63%. Despite this, there were some technical problems. One is that the mathematical equations arising from application of Probability Theory can be difficult or impossible to solve via pencil and paper. This is largely alleviated by using computers to do the calculations, but 19th century scientists did not have that option.

There were also philosophical issues. Nineteenth-century thinkers were pushing toward the idea that there existed some sort of objective scientific truth independent of human thought. The idea that probabilities represented degrees of belief was apparently too squishy and subjective, so they adopted the maxim "let the data speak for themselves", along with the idea that probabilities reflect the relative frequencies of different measured outcomes in the limit of an infinite number of observations. So if you flip a fair coin infinitely many times, exactly half of the outcomes (50%) would be heads. At the core of the philosophical disagreement lay a couple of technical difficulties. First, there was no known reason to accept the sum and product rules as "right" within the context of Probability Theory (one could propose other rules), yet they arose naturally from the frequency interpretation (Cox later showed the rules could be uniquely determined assuming the basic concepts of Probability Theory). Second, Bayes' Theorem tells us how to update our beliefs given data. But if you "peel the onion", so to speak, going back to the point before any data had been collected, how do you assign the prior probability p(H|I)?

This proved to be a sticky problem. Special cases could be solved, e.g. it's clear that for the coin-flip problem with no other information you should assign 50%/50%. But for more complicated problems where one had partial information, no general method existed for calculating a unique prior. It wasn't until the 1950's that physicist Edwin Jaynes successfully addressed this issue, borrowing ideas from information theory and statistical physics. Jaynes introduced the idea of Maximum Entropy, which basically tells you to assign probabilities such that they are consistent with the information you have, while adding no new information (information theory tells you how to measure information in terms of probabilities; entropy is just a measure of your lack of information). The underlying arguments are deep, stemming from the idea in Concept 3.3 that equivalent states of knowledge represent a symmetry, and that your probability assignments must reflect that symmetry. To do otherwise would be adding information without justification. But the horse was out of the barn at that point. The frequency approach had been used in practice for decades, and even in the 1950's the computing technology required for practical widespread use of Probability Theory did not exist.
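The entropy being maximized is Shannon's measure of missing information. A quick sketch (the "peaked" distribution is an arbitrary example I've made up for contrast):

```python
from math import log

# Shannon entropy of a discrete distribution, in bits: a measure of
# how much information is missing from the probability assignment.
def entropy(probs):
    return -sum(p * log(p, 2) for p in probs if p > 0)

# Knowing only "a die has six faces", the uniform assignment has maximum
# entropy; anything more peaked smuggles in information we don't have.
uniform = [1/6] * 6
peaked = [0.5, 0.3, 0.1, 0.05, 0.03, 0.02]  # arbitrary illustrative example

print(f"uniform: {entropy(uniform):.3f} bits")  # log2(6), about 2.585
print(f"peaked:  {entropy(peaked):.3f} bits")   # strictly less
```

Any deviation from uniform lowers the entropy, i.e. claims information beyond "six faces", which is exactly what Maximum Entropy forbids without justification.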

Today, of course, computers are cheap and ubiquitous, and indeed the use of Probability Theory is beginning to increase. But the progress is slow, and as is often the case, widespread change will require the next generation of scientists to really grab the idea and run with it while the current generation fades away.

Whew, that was quite the marathon post. I've hardly done the topic justice, but hopefully you at least got some ideas about what's wrong with how scientific inference is presently done, how you can avoid being confused by apparently conflicting results, and where the solution lies. Below are the promised references.
  • Probability Theory: The Logic of Science, E. T. Jaynes: The "bible" of Probability Theory. Jaynes was perhaps the central figure in the 20th century to advance Probability Theory as the mathematical framework reflecting the scientific method. This book is jam-packed with "well, duh" moments, followed by the realization that almost everyone in science reasons in ways which range from unduly complex and opaque to mathematically inconsistent. Not an easy book to read, full of some difficult math, but also plenty of conceptual exposition and very clear thinking about difficult topics. Jaynes does tend to rant a bit at times, but usually against determined stupidity. Required reading for all scientists, and anybody who needs to make critical decisions in the face of incomplete information.
  • Articles about probability theory: an online collection, including the works of Jaynes. You can download the first three chapters of "Probability Theory" in case you want a taste before plunking down 70 bucks. I particularly like this article detailing the historical development.
  • Data Analysis: A Bayesian Tutorial, D. Sivia and J. Skilling: A more pithy presentation aimed at practitioners. Clearly written without being too math-heavy, "Data Analysis" hits the high points and illustrates some key concepts with real-world applications. A good place to get your feet wet before tackling the intellectual Mt. Everest of Jaynes' book.

Monday, September 29, 2008

Think Bigger, Eat Simpler

Hi everybody. Sorry to have dropped off the blogosphere, been busy with the day job. I'm going to steal a few minutes from writing software to share a few thoughts.

I've been increasingly frustrated with the narrow thinking and shoddy logic which goes into "dietary recommendations", because it seems to me that the situation is really pretty simple. Everybody and their sister is getting into this game, with the drive towards very specific "eat 12% of this" sorts of recommendations, usually accompanied by some sort of pseudo-logical justification. Example: Jimmy Moore had a great interview (Part 1, Part 2) with Dr. Stephen Gundry, author of yet another diet book. In the interview, Gundry claims we should eat 95% plants. Why? Because that's what gorillas eat. Gorillas are genetically similar to humans, and maintain massive size and very low body fat. That argument ignores the important differences between gorillas and humans, not the least of which are the gorilla's vastly larger digestive tract and jaws with associated musculature, both required for effective digestion of large quantities of plant matter (gorillas eat upwards of 30 lbs. of vegetation daily). Indeed, by Dr. Gundry's logic of genetic similarity, we should eat like chimpanzees, which are genetically closer to us than gorillas, and whose diet is mostly fruit. Maybe Dr. Gundry doesn't like fruit, or chimps aren't muscly enough for him.

Another example, also courtesy of Jimmy Moore: Dr. Richard Johnson, author of another of the plethora of diet and nutrition books, claims that it's really fructose that is the root cause of our current rash of metabolic diseases. I think the metabolic science certainly indicates that overconsumption of fructose has significant potential for negatively impacting health. But in his explanation to Jimmy, Dr. Johnson focuses on animal studies rather than the underlying metabolic processes. Animal studies as such can never be more than suggestive about the corresponding effects in humans. Worse, Johnson seems to think that his fructose hypothesis excludes all other hypotheses as to the origin of obesity and other metabolically-related diseases, specifically targeting Taubes' hypothesis from Good Calories, Bad Calories. That's shaky ground, as Taubes builds on broadly accepted fundamentals of biochemistry and cellular biology, while Johnson largely seems to be making inferences from mouse studies. But the thing that annoys me the most is that the two hypotheses are clearly not mutually exclusive; indeed, one might expect the two effects to be synergistic in driving the development of metabolic syndrome, an effect certainly reflected in the decreasing average age of the onset of Type II diabetes.

At the top of the whole mess is the government's food pyramid. I don't have the time to get into it here, but if you want a real laugh some time, you should read the scientific justification for the food pyramid. And for the life of me I can't understand why our society takes that as the standard for nutrition. Anybody who's even semi-conscious can look around right now and see how skilled our government is at screwing things up just as badly as possible. Why would their nutritional recommendations be any different? Jimmy Moore recently pointed to a bit in the New York Times showing how diet has changed since 1970. You'll note that the major changes track food pyramid-ish recommendations, like eating more veggies, grains, and vegetable oils. Public health, meanwhile, is swirling down the toilet; but maybe that's to be expected given the usual governmental competency.

I would assert that there is no single "expert opinion" that constitutes the final word on human nutrition. Most of these experts are trying to sell books, videos, food, etc. and to do so must necessarily distinguish themselves from all of the other "experts". By the way, this extends to the government as well, as you'll find that appointees to head departments like the USDA generally come from the food industry. As a society, we've fallen into the trap of uncritical reliance on "experts" to tell us how to live our lives, rather than using our own brains to figure out some fairly obvious things for ourselves. I can tell you from first-hand experience that, on average, most scientists are no smarter than you, regardless of your education or profession. More than anything, scientists know a lot of big words and few actual concepts, and spend most of their careers blindly misapplying them.

In the excellent book The Omnivore's Dilemma, Michael Pollan points out that omnivores devote a significant amount of brain-power to figuring out what is edible. That's an obvious necessity when you can eat just about anything. Humans and their big brains represent the extreme endpoint of this pattern. Two million years of human evolution have pushed brain development with the primary goal of becoming more effective at hunting and gathering food, allowing us to populate an extraordinary variety of ecological niches around the planet. Only over the last several thousand years, and mostly the last 100 or so, have we sacrificed evolution's gift in favor of "experts": doctors, scientists, media, government, etc. If you want to eat healthy, you need to use that big brain and figure out what works for you.

I don't want to offer yet another "expert" opinion on this topic, but to get the thinking started, let me throw out a few broad ideas. None of these are particularly original, but I think we need to "think bigger and eat simpler". A lot of the confusion around dietary recommendations stems, I believe, from the "experts" taking a narrow view, often a single idea (e.g. "animal products are bad", "all carbs are bad"). They also need to make the money you spend on their book seem worthwhile. Combine this with the narrow viewpoint, and you can wind up with "diet plans" that are complicated and difficult to follow. More often than not it also seems like the food sucks. Healthy eating should be easy - after all, we did it for hundreds of thousands of years with no experts or books. Major lifestyle changes are hard enough without having to do a bunch of math and suffering through unsatisfying meals.

First and foremost, realize what humans must have eaten during the course of evolving that big brain. We didn't have agriculture, nor factories to tear apart and reconstitute our food into unrecognizable forms. We ate more or less whole foods of both plant and animal origin. Debates about how much of this or that type of food we should eat are probably irrelevant, and both the archaeological record and modern hunter-gatherers indicate humans can thrive on a considerable variety of diets. Ultimately humans require energy (preferably from fats or carbohydrates), protein, and a variety of micronutrients (vitamins and minerals). Whatever combination gets that for you is probably workable, as witnessed by the wide dietary variety of remaining hunter-gatherers who exhibit excellent health. The Inuit eat almost all animal products and get most of their energy from fat, while the Kitavans eat a large proportion of starchy vegetables, supplemented by seafood. Both groups exhibit comparable health, and little evidence of diabetes, heart disease, cancer, etc.

The key, I think, is the emphasis on whole foods, or at least foods that have a recognizable natural source, since that is what has been on the menu for all of human existence prior to the advent of agriculture. With very little extra thought, we can narrow that group down further by considering what could be reasonably hunted (just about anything that moves) or gathered. Consider grains, for example. Try this experiment some time: find a nice field full of wild grass that has gone to seed. Now go and gather up enough seeds to provide significant calories, say about a kilogram. Then process that grain into an edible form, using only naturally available tools like rocks (and if you actually do all of this, please don't eat the grains, because if you don't prepare them right you'll wind up in the hospital). Now compare that level of effort to digging up some root vegetables, picking fruit, or taking one of your grinding rocks and bonking a deer on the head. I think everyone who spouts off about "healthy whole grains" should be forced to do this exercise.

It doesn't necessarily follow that foods outside of human evolutionary experience are bad; but it seems unlikely that food types we've eaten for the last two million years or so have much negative impact on health. Processed foods don't have to be bad for you, but what's the point of eating them? We don't have to eat processed food to get our nutrients, and more often than not the processing destroys much of the nutritional content, while potentially exposing us to nutrients in forms and quantities we are not designed to handle (e.g. high-fructose corn syrup, refined starch, high concentrations of polyunsaturated fats, lectins from grains). In fact, if there's one thing I think just about all diet gurus would agree upon, it's that we should be trading in refined foods in favor of whole foods (just remember that despite the marketing phrase "whole grain foods", whole grain foods are nearly all highly refined).

Do some research on the actual nutritional content of foods, and apply a little critical thinking. When you compare with the usual (and usually dogmatic) recommendations, you are going to find lots of surprises. For instance, NutritionData.com cites spinach as being "a very good source of calcium", yet you'd have to eat about two pounds of spinach to get 100% of the RDA of calcium (and of course the RDA levels were designed to avoid overt disease, not to optimize health). I'm not saying you shouldn't eat spinach, but rather that you should be aware of what you're actually getting at realistic intakes, as opposed to trusting vague characterizations like "a very good source". An interesting and surprising exercise is to use NutritionData.com (or your favorite nutrition database) to find the whole foods highest in various nutrients. I did this, comparing 200-calorie servings across all of the tracked nutrients; you can view the results here.
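The spinach claim is easy to check for yourself. Here's a quick sketch in Python, using approximate database values (raw spinach: roughly 23 kcal and 99 mg of calcium per 100 g; an adult calcium RDA of about 1,000 mg; these are ballpark figures, and exact numbers vary by source):

```python
# Approximate values for raw spinach (per 100 g), taken from a typical
# nutrition database; exact numbers vary by source.
KCAL_PER_100G = 23.0
CALCIUM_MG_PER_100G = 99.0
CALCIUM_RDA_MG = 1000.0  # adult RDA, roughly

# Grams of raw spinach needed to hit 100% of the calcium RDA.
grams_for_rda = 100.0 * CALCIUM_RDA_MG / CALCIUM_MG_PER_100G
pounds_for_rda = grams_for_rda / 453.6
kcal_for_rda = grams_for_rda * KCAL_PER_100G / 100.0

print(f"{grams_for_rda:.0f} g (~{pounds_for_rda:.1f} lb) of raw spinach "
      f"supplies 100% of the calcium RDA, at only {kcal_for_rda:.0f} kcal")
```

The answer comes out to a bit over a kilogram, i.e. the "about two pounds" quoted above, and it also shows how calorically dilute that pile of leaves is.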

Another use for your big omnivore brain is paying attention to your body's response to different foods. For an omnivore, such feedback is important. Throughout our evolutionary history our menu choices have often been very broad (much broader even than in the supposed "plenty" of modern life, where the apparent variety is really just different manipulations of corn, soy, and wheat). Maintaining optimal health would have required the ability to distinguish between food sources in terms of nutrient density and availability without the use of a laboratory. For example, try filling your stomach with raw leaves (like spinach). I think you'll find that while your stomach feels full temporarily, you won't feel "satisfied", and will be hungry again soon (1 kg of raw spinach has only 230 kcal). If you finish a meal and are soon after thinking about the next meal, you're probably missing something. A good meal should make you feel good, not only right after, but also for most of the time until your next meal. For example, meals high in refined carbohydrates give most people a temporary "rush", but soon after result in a "crash". Don't try to overcome the crash with another rush.

Hunger is healthy, but you should only experience intense hunger if you're actually running a serious caloric deficit. If you're starving two hours after a 1000 kcal meal, something is screwed up, since that 1000 kcal should easily last you eight hours, assuming moderate levels of activity. Cravings are not necessarily a bad thing, but you should consider them in the context of two million years of human evolution. If you're craving something sweet, consider that it may be because your body is looking for the nutrients found in fruit. If you're craving something fatty, ignore the gurus' admonitions against fat, and just eat it. Again, your craving may not be so much for fat specifically, but for the nutrients that often accompany fat in whole foods. Satisfy your fat craving with a whole nutrient-dense source (e.g. avocado, grass-fed butter, coconut milk smoothie).
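The "eight hours" figure is just simple division. A sketch, assuming a total daily expenditure of about 3,000 kcal for a moderately active adult (an assumed round number, not a measurement):

```python
# Rough energy bookkeeping: how long should a meal "last"?
# Assumption: ~3000 kcal/day total expenditure for a moderately
# active adult (a round number for illustration).
DAILY_EXPENDITURE_KCAL = 3000.0
kcal_per_hour = DAILY_EXPENDITURE_KCAL / 24.0  # ~125 kcal/h

meal_kcal = 1000.0
hours_covered = meal_kcal / kcal_per_hour
print(f"A {meal_kcal:.0f} kcal meal covers ~{hours_covered:.0f} hours of expenditure")
```

A more sedentary 2,400 kcal/day would stretch the same meal to ten hours; either way, intense hunger two hours in isn't about a caloric deficit.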

Keep it simple. No other animal has the capability to count calories or exercise "willpower", and humans had no need for such until the last century. Your body will tell you what it needs. You just need to listen, and use that big brain to filter the available options down to something that is reasonably likely to fulfill those needs, as opposed to refined junk that temporarily tricks your body into thinking its requirements have been met.

Finally, find yourself a person of advanced age who's still going strong. Modern life has pretty much destroyed the social cohesiveness humans experienced over our two million years of evolution, where those who successfully made it to old age could pass along their wisdom of how they got there. You will probably learn more of value from that individual than from a room full of diet book authors. Be sure to be open to what they tell you. For instance, more than once I've heard centenarians credit their health to a daily breakfast of bacon and eggs. Don't discount this as genetics or luck just because the bozo nutritionist at your gym said those things are unhealthy. The centenarian has spent 100 years listening to their body, while the nutritionist is just blindly repeating something from a book.

And though I'm sure one exists, I have yet to hear anyone credit a daily bagel and skim milk for reaching the century mark.

Tuesday, July 22, 2008

Energy Conservation: It's Not Just a Good Idea, It's the Law

Michael Eades' latest blog post points us to a review of Gary Taubes' Good Calories, Bad Calories by Dr. George Bray. Gary Taubes was given the opportunity to respond, and as usual, pretty much brings the wood, from the standpoint of logical clarity and consistency. I haven't read Bray's review in detail, but skimming it I have to wonder how carefully he read the book. Indeed, he seems to essentially agree with Taubes that fat storage is driven by hormonal factors as part of overall metabolic regulation, and gives some examples where obesity results from failures in these regulatory mechanisms (which we'll also explore in subsequent posts in the Energy Regulation series). Despite this apparent agreement, Bray spends about 13 pages simultaneously trying to disagree with Taubes. Smells like cognitive dissonance, at least from my cursory reading. Taubes' reply does a nice job of cutting through the fog.

Bray (like many others) seems to interpret Taubes' work as somehow implying a violation or misunderstanding of the First Law of Thermodynamics, which is sort of humorous considering Taubes has a degree in physics from Harvard. Physics students pretty much get these sorts of fundamental laws beaten into them from day one. In addition to Bray's review, there was a lot of noise about the First Law of Thermodynamics in response to the recently reported study about low-carbohydrate vs. low-fat diets. Being a physicist, I find misapplication of the First Law thoroughly annoying, so let's dig into this topic a bit and hopefully raise the level of understanding.

Use of the term "First Law of Thermodynamics" is a bit of a historical accident. Bray actually uses the term I prefer, "Law of Conservation of Mass and Energy". Actually, "Mass" is redundant, since mass is just another word for energy, so let's shorten that to the "Law of Conservation of Energy". The statement of energy conservation is simple: in a closed system, the total quantity of energy does not change. Energy may change "forms", e.g. the stored chemical energy of a battery can be converted to light. But the total amount remains unchanged. The field of thermodynamics was largely developed in the 19th century, before we knew about atoms and such. We now understand that energy conservation in "thermodynamic systems" (consisting of very large numbers of atoms) simply follows from the more general law of energy conservation for all physical systems.

Why is energy conservation a "law"? There are many "conservation laws" in physics, and they all arise because of symmetries. Mathematically, physicists model the world via equations of motion, which basically tell how the state of the system under study changes as a function of changes in time and space coordinates. A "coordinate" is just a numerical label for a point in space (or spacetime). Suppose we're doing an experiment inside of a cubical box, 1 meter on each side. We might pick a point in the box, say the bottom left front corner, to be the "origin", labeled as (0,0,0). The top right back corner is then (1, 1, 1) in meters.

But this choice is arbitrary. I could just as easily pick any other point as the origin, say the front left corner of the parking lot, and update all of my other coordinate values accordingly. This is called a "transformation". Similarly, I could move my experiment box from its original location. In neither case would I expect the experiment to have a different outcome. That's a symmetry: I changed one thing (coordinate origin, location of box), but it did not change the physics occurring inside the box. In this case, we would say the laws of physics are symmetric with respect to position.

A given symmetry in the equations of motion implies that some physical quantity is conserved, i.e. cannot change in a closed system. Symmetry with respect to position implies the conservation of linear momentum. Suppose I turn the box and observe the same outcome. This rotational symmetry implies conservation of angular momentum. Now let's do the experiment today, come back tomorrow, and repeat. If we get the same result, we have a time translation symmetry, which implies the conservation of energy. So basically, the "Law of Energy Conservation" arises from the observed fact that all of the fundamental equations of motion in physics are invariant under time shifts. It doesn't matter whether you look now or later, the laws governing how systems evolve in space and time are unchanged. Note that this is not the same as saying that the state of the system doesn't change, just that the laws which predict how the system goes from one state to another are not affected by the passage of time (this doesn't have to be true, it is just observed to be so in all cases so far).

Now, the above discussion is a bit watered down. The mathematically rigorous version is "Noether's Theorem", and involves differential calculus and continuous transformations. One of the best physics books I've read is Lagrangian Interaction, by Noel Doughty. Very technical, but a highly illuminating read on the power of symmetry in understanding the universe. There are many other fascinating and powerful applications of symmetry as well, one of my favorites being in Probability Theory. But we'll visit that another time.
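For the curious, here is the simplest classical-mechanics version of the time-translation argument, stated as in a standard textbook derivation (gloss over the rigor):

```latex
% If the Lagrangian L(q_i, \dot q_i) has no explicit time dependence,
% \partial L / \partial t = 0, then along any trajectory obeying the
% Euler--Lagrange equations
% \partial L/\partial q_i = \tfrac{d}{dt}(\partial L/\partial \dot q_i):
\[
\frac{dL}{dt}
 = \sum_i \frac{\partial L}{\partial q_i}\,\dot q_i
 + \sum_i \frac{\partial L}{\partial \dot q_i}\,\ddot q_i
 = \frac{d}{dt}\left( \sum_i \frac{\partial L}{\partial \dot q_i}\,\dot q_i \right).
\]
% Hence the quantity
\[
E \;\equiv\; \sum_i \frac{\partial L}{\partial \dot q_i}\,\dot q_i \;-\; L
\qquad \text{satisfies} \qquad
\frac{dE}{dt} = 0 .
\]
```

In words: if the laws (encoded in the Lagrangian) don't care what time it is, the quantity we call energy cannot change. That's the whole content of "time-translation symmetry implies energy conservation" in the classical case.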

So, to review: The First Law of Thermodynamics is just another statement of the more general Law of Energy Conservation. Energy conservation in a closed system arises because the laws of physics do not change with time. If you were to ever observe an apparent violation of energy conservation, it must be either that you are not observing a closed system (haven't taken everything into account), or you've discovered new laws of physics. The former is far more likely than the latter. For example, suppose you put some water in a cup, stuck in a thermometer, and put the whole she-bang into the freezer. The temperature would drop as time passed, indicating that the average energy of the water is decreasing. But this does not imply violation of energy conservation. Were you to also measure the net heat output from the freezer, you'd find the missing energy.
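To make the freezer accounting concrete, here's a back-of-the-envelope sketch in Python using standard physical constants (the 250 g of water and 20 °C starting temperature are arbitrary choices for illustration):

```python
# Heat the freezer must pump out of 250 g of water at 20 C to turn it
# into ice at 0 C. The "missing" energy from the water shows up as
# heat dumped out the back of the freezer.
SPECIFIC_HEAT_WATER = 4.186   # J/(g*K), standard value
LATENT_HEAT_FUSION = 334.0    # J/g, standard value for water

mass_g = 250.0
cooling_J = mass_g * SPECIFIC_HEAT_WATER * 20.0  # cool 20 C -> 0 C
freezing_J = mass_g * LATENT_HEAT_FUSION         # 0 C water -> 0 C ice
total_kJ = (cooling_J + freezing_J) / 1000.0
print(f"Energy removed from the water: {total_kJ:.0f} kJ")
```

Note that most of the energy leaves during the phase change, while the thermometer reads a constant 0 °C: another reminder that temperature alone is an incomplete energy ledger.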

Back to our original story. Bray makes the point "Over the period of about 100 years from 1787 to 1896, the Laws of Conservation of Matter and Energy were shown to apply to human beings, just as they do to animals." That's a no-brainer given what we've learned above, since humans and animals are physical systems, ultimately governed by the same physical laws as the subatomic particles which comprise them. They didn't know about atoms and Noether's Theorem in the 19th century, so the explicit study of energy conservation in living organisms is understandable. But now it's not even a point of discussion, so I don't know why Bray (and so many others) keep lecturing about it. As far as anyone can tell, energy conservation is built into the fabric of the universe. The core issue isn't violation of this law, it's whether your metabolic theory or experiment has done a complete accounting of all energy inputs and outputs.

Energy enters the body in the form of food. In healthy people, the only way it can leave the body is through physical exertion or heat. Energy may be used in the body to fuel other biological processes ("base metabolic rate"), or it can be stored in various chemical forms. Misinterpretations seem to arise because there is an assumption that base metabolic rate and heat output are independent of caloric intake, and further independent of macronutrient composition. If you assume that intake is independent of storage and output, you can draw some strange conclusions. The body has ways of regulating total input, storage, and output in an attempt to maintain energy balance in a healthy range. As such, the output side must be related to the input side, otherwise energy regulation would be doomed to failure.
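The point can be illustrated with a toy bookkeeping model. Every number and functional form below is invented purely for illustration; the only idea being demonstrated is that if heat output depends on intake, storage does not simply track "calories in minus a fixed calories out":

```python
# Toy bookkeeping for the energy-balance identity:
#   intake = base_metabolism + activity + heat + change_in_storage
# All numbers are invented. The key feature: heat output is modeled
# as rising with intake (e.g. via diet-induced thermogenesis), so the
# output side is not independent of the input side.
def storage_change(intake_kcal, base_kcal=1600.0, activity_kcal=600.0):
    heat_kcal = 200.0 + 0.1 * intake_kcal  # heat rises with intake
    return intake_kcal - base_kcal - activity_kcal - heat_kcal

for intake in (2400.0, 2700.0):
    print(f"intake {intake:.0f} kcal -> storage change {storage_change(intake):+.0f} kcal")
```

In this toy model, eating 300 extra kcal changes storage by only 270 kcal, because part of the surplus leaves as extra heat. A model where heat output were fixed would force every surplus calorie into storage, which is exactly the assumption under dispute.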

Consider a simpler example: drinking water. When we're thirsty, we drink water. The signal for thirst is generated in the brain as a function of the detected water content in the body. Too low, you get thirsty. But when you drink some water, it takes time for the water to get absorbed into the blood and signal the brain. So we tend to drink more water than we actually need; that's probably also a good evolutionary strategy, sort of "better safe than sorry". The body then has mechanisms to get rid of the excess, mostly as urine. The amount of urine we produce is clearly correlated to the amount of water we drink. If water output were independent of water input, we'd be in constant danger of either dehydration or water poisoning, depending on availability of water. Like food in Western society (and increasingly elsewhere), water is abundantly available, yet people aren't dropping dead from over-hydration because the input, usage, and output are regulated by the body. Why should we expect any different for energy regulation?

Like I said earlier, if it appears that energy conservation is violated in an experiment, such as the recent low-carb vs. low-fat diet study, the most likely explanation is that the experimenters did not measure all of the energy output. They did estimate physical activity, but it's more difficult to measure heat output. Similarly, Taubes is not saying "calories don't count", but rather that you must consider all methods of energy output when discussing energy balance. Further, you must consider the physiological mechanisms that control energy input, storage, and output, because those mechanisms determine the relationships amongst them. When you do this, you find not only that output is correlated with input, but also that macronutrient composition potentially affects input, storage, and output as well. Macronutrients affect not only energy balance but also other physiologically important quantities. Blood sugar, for example, is tightly regulated. If it goes too high or too low, the body has problems. So we would expect a different biological response if we eat the same calories as sugar or as fat, and of course this is exactly what is observed. It should not be surprising that high-carbohydrate and high-fat diets have very different effects on metabolism. Violation of energy conservation is not required to explain the results, just that the system has different responses to different inputs, and that the caloric content of food is only one aspect that is detected and monitored by the body.

Bray actually seems to agree with this point: "The concept of energy imbalance as the basis for understanding obesity at one level does not preclude any of the influences that affect or modify food intake or energy expenditure, including the quantity and quality of food, toxins, genes, viruses, sleeping time, breast feeding, medications, etc. They are just the processes that modify one or other component of the energy-balance system." I think the fundamental disagreement may be whether fat storage depends sensitively on the precise balance between energy intake and output, i.e. that storage is driven by eating even a little too much. But that implies a pronounced lack of robustness in the regulatory system, one which is not observed, any more than it is in regulating water balance.

Anyway, the next time someone tells you that low-carb diets can't work because they violate the First Law of Thermodynamics, you can reply with "Low-carbohydrate diets exhibit continuous symmetry under time translation transformations, hence do not violate conservation of energy." That ought to shut 'em up.

Thursday, July 17, 2008

Even More Dissonance

A recently published study comparing various weight-loss approaches has been getting a lot of press and Internet buzz, probably because the results contradict mainstream thinking about diet and health. It's pleasantly surprising that this is getting some media coverage - usually such dissonance-inducing results are largely ignored. Regina Wilshire posted an especially amusing blog entry, showing how different people interpret these results. You can taste the cognitive dissonance, as each individual spins the results according to their own beliefs.

The essence of the study results is that those following a low-carbohydrate diet had greater weight loss and improvements in blood lipids. The Mediterranean diet did well also. Both of these results are predictable from what we know about metabolic regulation, but for the mainstream, this result clearly induces significant dissonance. I particularly enjoyed Dean Ornish's attempt at reconciling this dissonance. Here's a choice quote:

I'm also very skeptical of the quality of data in this study. For example, the investigators reported that those on the "low-fat" diet consumed 200 fewer calories per day—or 10,000 fewer calories per year—than those on the Mediterranean diet, yet people lost more weight on the Mediterranean diet. That's physiologically impossible.

I think Dr. Ornish needs to bone up on his biochem. We'll hit this point later in the series on Energy Regulation, but the body very definitely has a mechanism to dump excess fat calories in the form of heat. And of course Ornish's calorie-centric focus completely ignores other regulatory effects, such as insulin's effect on fat storage. Ornish does spend plenty of time telling you all about himself, what he believes, why his particular diet flavor is superior, etc. The article reads more like an infomercial than scientific exposition. Comparison of different scientific hypotheses requires inclusion of ALL relevant evidence. Ornish heavily weighs evidence of his own creation, which (not surprisingly) supports his own preconceived notions. If you selectively weigh evidence in this way, you can come to any conclusion you want.

Here's another fun quote from Ornish: "Most people associate an Atkins diet with bacon, butter and brie, not a plant-based diet like the one I recommend." There's that "I" again. Shouldn't the diet be recommended by the evidence, not one individual? I guess Dr. Ornish is smarter than the rest of us. Maybe he would grace us with a more detailed explanation of why he's "right" given our knowledge of metabolic regulation at the molecular and cellular level?

I'm not holding my breath.

Ornish's comment also highlights one of the major origins of dissonance surrounding these recent results: the seemingly unshakable belief that saturated fat ("bacon, butter, and brie") plays a role in a wide range of disease processes. We saw in the original post on cognitive dissonance that there actually exists essentially no evidence of causality (I just confirmed this with an ex-official of the American Heart Association). For example, there may be some statistical association between saturated fat consumption and development of heart disease (particularly if you limit the observational data set), but there's no evidence at all of causality at the molecular and cellular level. Let's look at some ways in which this association might arise:

  • Fast food is often high in saturated fat. It's also often high in refined carbohydrates, particularly fructose. The damage wrought by increased carbohydrate intake (fructose is particularly good at this) and the hormonal derangement from repeated insulin spikes (and probably fructose as well) quite logically predicts an increase in heart disease. The likely high consumption of oxidized fats from deep-fried foods is the cherry on top of this sundae. Lipoprotein particles have a water-soluble outer shell composed of proteins and phospholipids, which themselves contain fatty acids. White blood cells have a specific receptor for oxidized LDL (but not unoxidized LDL), so if your LDL includes some oxidized fat from your French fries, you should expect an increased immune response, which is known to be important in the development of atherosclerosis. So if a population has a high consumption of fast food, not only is their saturated fat consumption higher, so is their consumption of refined carbohydrates and oxidized fats. Which of these actually causes the observed increase in heart disease?
  • Grain-fed beef is known to have some nutritional issues. Grains are not the natural food of cattle, who prefer to eat leafy material, which tends to be rich in the omega-3 alpha-linolenic acid. When compared with grass-fed beef, grain-fed has a significantly higher ratio of omega-6/omega-3 fatty acids. There is a biochemical reason to believe this could increase heart disease, due to the pro-inflammatory effect of omega-6 fats. Grain-fed beef is also much higher in saturated fat, so there would be an association between saturated fat intake and increased omega-6/omega-3 ratio.
  • Grain-fed beef is also higher in total fat. Guess what - carbohydrates make cows fat too! But this fat is essentially "empty calories" in that the increased fat intake does not bring significant additional micronutrients, probably displacing calories from foods that are nutrient dense. Again, at the molecular/cellular level, there are good reasons to believe these micronutrients (like magnesium) are protective against the development of heart disease.
  • Eating a crappy diet like fast food makes people sick. Sick people tend to stay inside. If you don't go outside, in all likelihood you are deficient in Vitamin D. Vitamin D deficiency is implicated in a whole host of diseases, including heart disease. I'll bet saturated fat consumption is correlated with Vitamin D deficiency as well.

I'm sure with a little thought we could come up with several more. The point is this: associating causality with an individual statistical correlation is a very slippery slope. If you have no evidence for causality, making such an association implies that you are ignoring other possible causes WITHOUT EVIDENCE. Attempting to treat sick people based on this association could be expected to be ineffective at best, harmful at worst. And of course you wind up with the precise situation we observe today, which is that some bogus dogmatic belief blocks the advancement of science due to cognitive dissonance.

Sunday, July 13, 2008

Energy Regulation 2: Appetite

In Energy Regulation 1, I asserted that the body has many regulation mechanisms for energy intake, storage, and utilization. This regulatory network presumably evolved to maintain health over a wide range of conditions: different seasonally available foods, physical requirements, hot and cold temperatures, etc. Let's start digging into the details of what is known, which should then inspire some ideas on what aspects of modern life could potentially knock things askew, resulting in a situation where the body actually defends an unhealthy state like obesity.

A few caveats first. Metabolic regulation is a complex and evolving subject, and much of the knowledge is very recent (if you want to give yourself a headache, check out this spreadsheet I made trying to illustrate the various parts and their relationships). Even if you were to consider all of the available science I doubt the picture is anywhere near complete, and of course I've probably only been exposed to some smallish subset of what is known. If anybody out there finds gaps in this presentation, please fill them in via the comments. Additionally, much of the research on metabolic regulation is done on animals and extrapolated to humans. Nobody is going to do experiments where, say, they directly infuse oleic acid into the brains of people. Some of the published reviews are unfortunately vague as to whether the mechanisms discussed have been studied in humans.

One final issue is that the reviews are very focused on dietary fat, and to a lesser extent carbohydrates. Protein is essential for life, so there must be appetite and metabolic controls regulating protein intake, yet this is largely not discussed. For instance, I'm guessing somewhere in the body there's something that detects amino acids and influences appetite, particularly preference for protein-rich food.

With that in mind, we'll start at the beginning. Animals eat when they're hungry. In a healthy organism, hunger is a signal that available and/or stored energy is getting low and needs to be replenished. Humans have three primary energy stores: the stomach, glycogen (animal starch) in muscle and liver tissue, and fat (triglycerides) in adipose tissue. Now, if this system is working right, low available energy should be equivalent to low stored energy. But we're going to see it's quite plausible that conditions can arise where the body thinks available energy is low, yet excess energy is in storage.

The brain acts as a central controller, receiving various signals from the body and adjusting many different "knobs" to maintain a healthy state. Peripheral tissues also exercise some independent controls as well, e.g. the pancreas will secrete insulin in response to rising blood glucose without nervous system control. This combination of central and peripheral controls provides for both robustness and responsiveness.

The major nervous system player in metabolic regulation is the hypothalamus, an area at the base of the brain, roughly the size of an almond. The hypothalamus is the main connection between the rest of the brain and the various hormone systems of the body, sharing a private circulatory system with the pituitary gland, and projecting nerve connections to various other endocrine organs as well. The hypothalamus is also well situated to sample various chemical concentrations in the blood. Most of the brain is protected by the "blood-brain barrier" (BBB), closely-packed cells which tightly control what substances pass from the blood to the brain. But the hypothalamus is located near a region where the BBB is incomplete. It's leaky, in a sense, so the hypothalamus gets a taste of much of what's in the blood. The hypothalamus can be further divided into "nuclei", which have different sensory and control functions. Of particular interest here are the arcuate nucleus (ARC), the ventromedial nucleus (VMN), and the dorsomedial hypothalamus (DMH).

The brainstem is a close neighbor of the hypothalamus, sharing lots of neural connections. The particular region called the nucleus of the solitary tract (NTS) is the termination of the afferent fibers of the vagus nerve (afferent nerves cause signals to arrive at the brain; efferent nerves allow signals to exit the brain). The vagus nerve connects to many different organs, including those of the digestive system. The NTS appears to integrate different signals (both hormonal and nervous) and send them along to the hypothalamus. The hypothalamus does some additional integration, and projects to other brain areas involved in behaviors like finding food and eating it.

I've used the term "integrate" a couple of times. What does that mean? The neurons in the brainstem and hypothalamus receive many different signals: from other nerves, from hormones like insulin, and can directly sense nutrients like glucose. The "decision" of whether the neuron fires or expresses certain proteins must factor in all of these signals. For example, the brain requires a certain blood glucose concentration to function properly. If glucose falls, regardless of the level of insulin, the brain should take some action (like stimulating appetite), because otherwise you'll die.

The ARC in particular contains two populations of special neurons. One of these expresses cocaine- and amphetamine-regulated transcript (CART) along with pro-opiomelanocortin (POMC). These neurons seem to be associated with appetite suppression. For instance, POMC can be chopped up to yield alpha-melanocyte-stimulating hormone (alpha-MSH), which in turn binds to the melanocortin-4 (MC-4) receptor. Genetic problems causing defects in the MC-4 receptor result in obesity characterized by overeating. The other population expresses agouti-related protein (AgRP) and neuropeptide-Y (NPY), both of which increase appetite. If NPY is infused into rat brains, they respond with a several-fold increase in food intake that lasts 6-8 hours, similar to rats that have been fasted for 36-48 hours.

Having two opposing systems (as opposed to just one that gets turned up or down) allows for rapid fine-tuning of metabolism; this idea of opposing systems which maintain balance is found elsewhere, e.g. the sympathetic and parasympathetic nervous systems. Both classes of neurons appear to "detect" both available energy in the blood as well as hormonal levels and probably nerve signals, with opposing results. Energy nutrients go through part of the same cycle used to actually generate energy, and the resultant metabolic products appear to trigger opening/closing of ion channels on the cell membrane. Hormones like leptin and insulin have similar effects, hence the "integration" of these signals. Changing the balance of ions inside and outside the neuron affects the "action potential", making the neuron more or less likely to fire, express proteins, etc.

NPY neurons, for example, are glucose inhibited (GI), meaning the more glucose is around, the less active they become. If blood sugar falls, the NPY neurons become more active, and as we saw above, NPY appears to strongly stimulate appetite. So blood sugar falls, and you get hungry. Similarly if insulin or leptin falls, these neurons are activated, and again you get hungry. But what if insulin is high AND glucose is low? Well, the brain needs a certain glucose level to operate, so I would guess that the low glucose wins, because the alternative is a hypoglycemic coma and death. Have you ever had a major blood-sugar crash a few hours after a large carbohydrate-laden meal? It's the "Chinese food makes you hungry an hour later" thing (see e.g. Teriyaki Stix Beef Bowl: 102g of carbohydrate, probably all highly refined). I would bet the extreme feelings of hunger (kind of like you were starved for 36-48 hours) are the result of increasing NPY concentrations in the hypothalamus, in turn triggered by low blood glucose, even though your insulin is still elevated. Rats show a preference for high-carbohydrate meals when stimulated with NPY. If the same is true for humans, then we shouldn't be surprised that a blood-sugar crash sends us scurrying for the vending machine to fearlessly slay and consume a candy bar, regardless of how much energy is stored in the stomach or fat. So we begin to see how the system can be broken in a way that stores excess energy, mainly fat.
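The "low glucose wins" speculation above can be caricatured as a decision function. This is a deliberately crude sketch with made-up thresholds and logic, not a model of any real neural circuit:

```python
# Hypothetical caricature of signal "integration" in an appetite
# circuit. Thresholds and rules are invented for illustration only.
def hunger_drive(glucose_mg_dl, insulin_high, leptin_high):
    # Low blood glucose overrides everything else: the brain must act
    # or risk hypoglycemic coma.
    if glucose_mg_dl < 65:
        return "strong hunger (glucose override)"
    # Otherwise, high insulin and leptin together signal plenty of
    # available and stored energy.
    if insulin_high and leptin_high:
        return "suppressed appetite"
    return "normal appetite"

# A post-carbohydrate-crash state: insulin still elevated, glucose low.
print(hunger_drive(glucose_mg_dl=60, insulin_high=True, leptin_high=True))
```

The point of the sketch is just the priority ordering: in this toy logic, elevated insulin and leptin cannot veto the hunger triggered by low glucose, which mirrors the blood-sugar-crash scenario described above.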

The scenarios described above relate more to the instantaneous availability of energy in the blood as opposed to the amount stored. The major energy store (in terms of calories) is white adipose tissue (WAT). Fat cells, or adipocytes, are not passive buckets, but rather metabolically active, both in the storage/release of fatty acids and in the secretion of hormonal signals relating to appetite and metabolic regulation. The best-known of these is leptin, a hormone whose secretion is proportional to the amount of stored fat. Leptin suppresses appetite, probably via multiple actions. Leptin inhibits the NPY/AgRP neurons (which stimulate appetite) and excites POMC/CART neurons (which decrease appetite). Leptin also slows gastric emptying, the rate at which food leaves the stomach and enters the small intestine. So more leptin (everything else being constant) should keep the stomach fuller for a longer time, and the stomach of course sends its own signals relating to appetite and satiety. Leptin may also increase base metabolic rate via diet-induced thermogenesis, a topic we'll explore later. There is a genetic defect that causes people to secrete little or no leptin. Individuals with this defect tend to overeat considerably, and extrapolating from rats may additionally have a lower metabolic rate, with the result of extreme obesity. Administration of leptin to these individuals substantially improves the condition.

Fat cells secrete other hormones as well. Adiponectin secretion is inversely correlated with stored fat: more fat, less adiponectin, and vice versa. Adiponectin potentially influences many things, including appetite, insulin sensitivity, inflammation, and vascular function. Interleukin-6 (IL-6) causes insulin resistance in fat cells, which tends to make them release fat instead of store fat. The hypothalamus also expresses and contains receptors for IL-6, particularly in areas controlling body composition. Fat cells also express tumor necrosis factor alpha (TNF-alpha), which inhibits lipoprotein lipase (the enzyme required to get fat out of lipoproteins and into fat cells), stimulates breakdown and release of triglycerides in fat cells, and may also induce insulin resistance.

So we see mechanisms in place to control fat storage through appetite. As more fat is stored, more leptin is secreted, which should blunt the appetite. As fat is lost, leptin levels drop, which should promote appetite. Leptin (and other hormones from fat cells) may additionally modulate metabolic rate, to encourage fat burning when there is an excess, and discourage it during a deficit. So again there are a lot of knobs to turn, all aimed at maintaining fat storage in a particular range.
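For the programmers in the audience, the leptin loop described above is just a negative-feedback controller. Here's a toy simulation of that idea; every constant (the setpoint, the gain, calories per kg of fat) is invented purely for illustration, not measured physiology:

```python
# Toy negative-feedback model of the leptin loop: more stored fat -> more
# leptin -> blunted appetite, and vice versa. All numbers are made up.

def daily_intake(fat_kg, base_intake=2500.0, leptin_gain=50.0, setpoint_kg=15.0):
    """Appetite (kcal/day) blunted in proportion to fat above the setpoint."""
    leptin_signal = leptin_gain * (fat_kg - setpoint_kg)  # tracks stored fat
    return max(0.0, base_intake - leptin_signal)

def simulate(fat_kg, days=365, expenditure=2500.0, kcal_per_kg=7700.0):
    """Run the loop daily: a surplus stores fat, a deficit burns it."""
    for _ in range(days):
        balance = daily_intake(fat_kg) - expenditure
        fat_kg += balance / kcal_per_kg
    return fat_kg

# Whether we start lean or fat, the loop pulls fat mass toward the setpoint.
print(round(simulate(5.0), 1), round(simulate(30.0), 1))
```

Start the simulation lean or fat, and the feedback drags fat mass back toward the setpoint, which is exactly why a broken leptin signal (as in the genetic defect above) is so devastating.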

Our final stop is the gastrointestinal (GI) tract along with the closely related pancreas. When we eat, food hits the stomach, which does a nominal amount of digestion both mechanical and chemical. The stomach represents short-term energy storage, more or less the "gas tank" for the body, and so it's no surprise that the stomach is involved in appetite as well. Indeed, most people think of appetite in terms of "my stomach is full/empty", but we've seen above that many other factors come into play as well. The stomach signals the full/empty state both through nerves and hormones. Stretch receptors on the stomach wall send signals via the vagus nerve indicating fullness. The stomach also secretes the hormone ghrelin, which strongly stimulates appetite. Empty stomach means more ghrelin, full stomach means less. Increasing ghrelin increases brain concentrations of NPY. So an empty stomach definitely tends to increase your appetite, but gastric signals must be integrated with the variety of other signals to actually determine the degree of appetite stimulation.

Most of the hormonal action occurs in the small intestine and pancreas, and indeed there is some interplay between these organs. The pancreas is not only an endocrine organ (which sends hormones into the blood), it is also an exocrine organ, emitting substances like the digestive enzymes required to break down food so it can be absorbed through the small intestine. The small intestine itself secretes several hormones in various quantities, depending on the total caloric content as well as the individual levels of carbohydrate, protein (really amino acids), and fat. These hormones have a wide variety of effects, including stimulation/inhibition of pancreatic endocrine and exocrine functions, modification of the rate at which food passes through the GI tract, metabolic control, and of course appetite. I'm not going to cover nearly all of these hormones or their effects. Check out the spreadsheet, or this paper and this paper for details.

A major hormone secreted by the small intestine is cholecystokinin (CCK, and no, I don't know how to pronounce it). Dietary fat and protein stimulate CCK release more potently than does carbohydrate, and long-chain fatty acids seem to have a greater effect than short-chain. CCK affects a number of systems, e.g. inducing gallbladder contraction (to release the bile needed to digest the fat which stimulated CCK release in the first place). CCK also slows gastric emptying. Again this makes sense from a regulatory standpoint. Once the small intestine has received some energy nutrients, CCK signals the stomach to stop sending more until the present batch is done processing.

CCK also strongly suppresses appetite. In rats, administering CCK reduced food intake in a dose-dependent manner: more CCK, less food eaten. The exact mechanism is unclear, but it seems to be a combination of reduction in gastric emptying (stomach stays full) and detection by the nervous system. In both monkeys and humans, the fullness of the stomach seems to modulate the appetite suppression of CCK. The afferent fibers of the vagus nerve as well as the brainstem express CCK receptors. The Otsuka-Long-Evans-Tokushima fatty rat (try saying that 3 times fast) is a genetic variant which lacks the CCK-1 receptor, and both overeats and becomes obese.
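The dose-dependent suppression seen in rats suggests a saturating dose-response curve. Here's a sketch using a Hill-style function; the baseline intake, half-maximal dose, and maximum suppression are all my assumptions for illustration, not fitted data:

```python
# Illustrative dose-response for CCK: more CCK, less food eaten, with the
# effect saturating at high doses. All constants are invented.

def food_intake(cck_dose, baseline=20.0, max_suppression=0.8,
                half_dose=2.0, hill=2.0):
    """Food intake (arbitrary grams) as a saturating function of CCK dose."""
    effect = cck_dose**hill / (half_dose**hill + cck_dose**hill)
    return baseline * (1.0 - max_suppression * effect)

for dose in (0.0, 1.0, 2.0, 8.0):
    print(dose, round(food_intake(dose), 1))
```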

A few notes on other GI hormones. PYY3-36 is released in proportion to calories and meal composition, with fat resulting in higher concentrations than protein or carbohydrate, and may inhibit food intake. Glucagon-like peptides GLP-1 and GLP-2 are cleavage products of preproglucagon. GLP-1 increases insulin secretion and suppresses glucagon release. It also slows gastric emptying and inhibits food intake. Key areas of the brain such as the ARC express GLP-1 receptors. GLP-2 release is potently stimulated by fat and carbohydrates, and may enhance the digestive and absorptive capabilities of the small intestine. Oxyntomodulin (OXM) is released in proportion to calories ingested. It suppresses appetite and gastric motility, enhances insulin secretion, decreases food intake, and possibly increases metabolic rate.

So the takeaway here is that the GI tract sends numerous hormonal signals indicating energy is present and being absorbed, please don't send any more. One interesting side-note is that the levels of some of these hormones, notably PYY3-36, GLP-1, and OXM, all increase after gastric bypass surgery. The effect of this should be to suppress appetite, and possibly increase metabolic rate, which would explain the success of such surgeries in reducing obesity. I find this interesting, because by itself I would guess reduction in stomach size should probably have little effect on overall food intake because of the other mechanisms regulating appetite based on stored and available energy. But diddle the relevant hormones, and voila, sustained appetite reduction and weight loss. Hopefully the increasing understanding of these regulatory mechanisms will give rise to better treatments, since surgery seems an extreme way of accomplishing the desired effect.

Finally, we come to the pancreas. The best-known pancreatic hormone is insulin, arguably the Big Mama of metabolic regulation. It is interesting to note that the protein structures of both insulin and NPY are remarkably conserved across evolution. If you look at a primitive animal like a hagfish, its insulin and insulin receptors are fairly similar to those of humans, so much so that hagfish insulin significantly stimulates human insulin receptors. The implication is that insulin's role is central in metabolism and development, and was settled early in evolution, as relatively drastic changes across species required little mutation of insulin. We're most familiar with insulin's role in controlling blood sugar, both by increasing tissue uptake of glucose and by regulating glucose output from the liver. Insulin also regulates many other aspects of metabolism, like fat storage and cell division. Subsequent posts will visit these in greater detail.

Insulin is manufactured by the pancreatic beta-islet cells (B-cells). When glucose enters the B-cell, it is metabolized to ATP, the primary short-term "energy currency" of the body. But rather than using that ATP for energy, some of it closes potassium ion channels. This depolarizes the cell membrane, allowing calcium ions to enter the cell and causing stored insulin to be released. The presence of glucose in the cell additionally signals the cell to manufacture more insulin. Amino acids also trigger insulin release to varying degrees, depending on the particular flavor, as do ketone bodies. The effect of fatty acids is complex and not well understood. It appears that fatty acids are necessary for normal glucose-stimulated insulin secretion. Increasing fatty acid concentrations in the short term (1-2 hours) will cause more insulin to be released for a given glucose concentration. But long term, elevated fatty acids impair insulin secretion. Both the nervous system and other hormones also affect the amount of insulin released.
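The chain of B-cell events can be sketched as a toy model. The thresholds and multipliers below are invented to show the logic only (glucose drives ATP production, ATP past a threshold triggers release, fatty acids potentiate short term and impair long term); they are not physiological values:

```python
# Toy sketch of the B-cell chain: glucose -> ATP -> K+ channels close ->
# membrane depolarizes -> Ca2+ enters -> stored insulin is released,
# modified by fatty-acid exposure. All numbers are made up.

def beta_cell_insulin(glucose_mM, ffa_hours=0.0):
    """Toy glucose-stimulated insulin secretion (arbitrary units)."""
    atp = glucose_mM                  # ATP production tracks glucose here
    if atp < 5.0:                     # below threshold: K+ channels stay open,
        secretion = 1.0               # only basal insulin is released
    else:                             # channels close, membrane depolarizes,
        secretion = 1.0 + (atp - 5.0) * 2.0   # Ca2+ triggers stored insulin
    if 0.0 < ffa_hours <= 2.0:
        secretion *= 1.3              # short-term fatty-acid potentiation
    elif ffa_hours > 24.0:
        secretion *= 0.7              # chronic fatty-acid impairment
    return secretion
```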

Insulin affects appetite, both directly and indirectly. The indirect path involves sensitization of the body to other satiety signals like CCK (a role shared with leptin). Insulin also appears to directly signal the hypothalamus, increasing activity of POMC/CART neurons and decreasing activity of NPY/AgRP neurons. We all know that the pancreas secretes insulin in response to blood glucose, but insulin secretion is also modulated by a number of other factors. We saw above how some GI hormones potentiate greater insulin release (the so-called incretin effect). Insulin levels are also a function of body fat: the more fat that is stored, the higher insulin is in all states (fed, fasting, etc.) So insulin signals both energy availability and energy storage, but the primary effects indicate that over the long term, insulin (along with leptin) signals how much fat is stored.

If insulin is infused directly into the brain (of a rat, presumably), the result is a decrease in food intake and loss of body weight in a dose-dependent manner. If insulin receptors are blocked, food intake and body weight increase. When insulin levels in the brain are held constant over long time periods via slow infusions, animals modify their diet and body composition until a certain body weight is achieved, and that weight is subsequently defended at a level determined by the insulin concentration.

We saw an example above where high insulin could be overridden by low blood glucose to cause hunger. Insulin suppresses appetite only when blood glucose is maintained at a proper level. Insulin-induced hypoglycemia (whether from a high-carbohydrate meal or administration of insulin) triggers an override mechanism in the brain, inducing hunger and eating to avoid going into a coma. Type 1 diabetics have the opposite problem: high blood glucose and low insulin. Type 1 diabetics are typically ravenously hungry despite high glucose, again showing the integrative capacity of the brain; yet they will fail to gain weight regardless of how much they eat, as the lack of insulin disrupts other metabolic functions.

The pancreatic B-cells also co-secrete another hormone called amylin. Insulin and amylin are secreted in a roughly fixed molar ratio, somewhere between 10-to-one and 100-to-one. Various disease states (including obesity) and pharmacological interventions increase the amount of amylin relative to insulin. While insulin appears to primarily signal stored fat levels, amylin signals both the amount of stored fat and energy availability from food intake. Amylin is secreted in proportion to body fat and meal size. Giving rats a dose of amylin prior to a meal reduces meal size. Blocking amylin receptors produces a long-lasting increase in food intake and fat storage. Amylin appears to act in the area postrema (AP) of the hindbrain. AP neurons activated by amylin are also activated by glucose, CCK, and GLP-1, and the AP projects to the NTS, which in turn projects to areas of the hypothalamus regulating appetite and metabolism.

Type 1 diabetes occurs due to destruction of the pancreatic B-cells, so Type 1 diabetics also lack amylin, which is thought to contribute to their large appetites. Type 2 diabetics treated with insulin often gain more weight (duh), but this can be mitigated by treatment with an amylin analog. Some doctors are prescribing amylin analogs in obese patients who are not being treated with insulin. Since amylin serves as both a satiety and an adiposity signal, this works as expected: these people both eat less and lose fat. But just as most obese people are insulin resistant, they're also amylin resistant. Administration of amylin to an already overtaxed system is, I think, a questionable long-term strategy. Additionally, we know the body wants to keep the insulin/amylin ratio fixed, probably for a good reason. Adding exogenous amylin to the mix perturbs this balance even more than it already is, rather than helping to restore it to a healthy state.

Last, but not least, is glucagon, manufactured and secreted by pancreatic alpha-islet cells (A-cells). Metabolically, glucagon tends to counter the effects of insulin, e.g. increasing glucose output from the liver. Glucagon secretion is stimulated mainly by protein, possibly by fat, and not at all by carbohydrate; indeed, glucose inhibits A-cell glucagon secretion. Pancreatic hormones are dumped into the portal vein, so the liver gets first shot at them. Apparently the liver removes most of the glucagon, and rather little makes it into systemic circulation. Even so, glucagon acts as a satiety signal. Rather than acting directly in the brain, glucagon's action probably occurs in the liver, which then sends a signal to the brain via the vagus nerve. Animals whose afferent vagal nerves have been blocked do not have their feeding inhibited by glucagon.

So let's see how some different meals may affect appetite. We'll revisit these later, after we've gone through the other aspects of metabolic regulation and can look at the big picture; but the isolated effects on appetite are still interesting. These are my guesses, not proven by any scientific research. Feel free to add your own scenarios to the comments.

We discussed above what may happen after a high-carbohydrate low-fat meal, like the Teriyaki Stix Beef Bowl (102g carbohydrate, 33g protein, 7g fat). The stomach fills, reducing ghrelin secretion and sending the "full" signal to the brain. The large amount of refined carbohydrates should elicit a large insulin and amylin response, further potentiated by the release of hormones like GLP-1 and OXM. The protein in particular stimulates CCK release, which along with insulin, amylin, and other GI hormones suppresses appetite. But the major insulin release induces hypoglycemia. The initial effect is probably an increase in gastric emptying via nervous-system control to try to balance out the blood sugar, but of course this tends to raise insulin even more. Sooner or later the depressed blood glucose causes an increase in brain NPY, and powerful hunger, despite the fact that rather little of the meal may actually have been used for energy.

How about a "healthy" low-calorie meal, maybe a really big salad, lots of veggies and fat-free dressing. The conventional wisdom is that the large fiber load (and amount of water) fills up the stomach, contributing to satiety. That's true, to a certain extent, as filling the stomach triggers both the stretch receptors and reduces ghrelin. But the relative lack of any energy nutrients implies correspondingly low secretion of appetite control hormones like CCK. The brain also will not detect much rise in blood sugar or fatty acids. In turn, gastric emptying and intestinal motility is not inhibited and may in fact be accelerated, so the stomach empties faster than it would in a high-calorie meal. Appetite suppression from stomach distension rapidly fades, and you're hungry again.

How about a "cardiac arrest" meal of a big steak smothered in mushrooms and butter? This is a calorically dense meal, probably occupies considerably less stomach volume than the big salad, so maybe we don't get as much from stretching the stomach. But once this hits the small intestine, we get should get lots of hormones like CCK and glucagon to suppress appetite and gastric emptying. Some insulin and amylin are secreted as well. The glucagon helps keep blood sugar stable, and the additional protein from the steak may temporarily bump up blood sugar as well. The fat makes it into the circulation more slowly, and should help to both suppress appetite and gastric emptying over the longer term. Additionally, fat sensing by the hypothalamus also help the liver regulate blood sugar, so we don't get the "low glucose" override.

So to summarize:
  • The high-carbohydrate high-calorie fast-food meal makes you get hungry faster due to insulin-induced hypoglycemia. This happens despite consumption of plenty of energy.
  • The low-calorie low-glycemic salad fills you up in the short term, but you get hungry again quickly simply due to lack of available energy.
  • The high-fat high-calorie meal suppresses appetite for a longer time, both by avoiding adverse conditions like hypoglycemia, as well as providing a measured release of energy into the blood via the small intestine.
Now I ask you, which of these meals is the most likely to cause fat gain?
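To make the comparison concrete, here's a back-of-the-envelope scoring of the three meals. The 0-1 input scores, the weights, and the "crash override" rule are all my own invention, just encoding the qualitative reasoning above:

```python
# Toy scoring of hours until hunger returns after each meal. A blood-sugar
# crash overrides the other satiety signals, per the NPY discussion above.
# All numbers are invented for illustration.

def hours_until_hungry(stomach_fill, gut_hormones, blood_sugar_crash):
    """Inputs scored 0-1; returns a rough satiety duration in hours."""
    if blood_sugar_crash:
        return 1.5                    # NPY-driven hunger despite a full gut
    return 1.0 + 3.0 * stomach_fill + 4.0 * gut_hormones

meals = {
    "teriyaki beef bowl": hours_until_hungry(0.9, 0.7, blood_sugar_crash=True),
    "big fat-free salad": hours_until_hungry(0.9, 0.1, blood_sugar_crash=False),
    "steak with butter":  hours_until_hungry(0.6, 0.9, blood_sugar_crash=False),
}
for meal, hours in sorted(meals.items(), key=lambda kv: kv[1]):
    print(f"{meal}: hungry again in ~{hours:.1f} hours")
```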

These examples are interesting (so I think), but again must be considered in the larger context of metabolic regulation. Obesity is not the simple result of overeating, nor is fat loss the simple result of undereating. Both are a combined effect of different regulatory mechanisms. Appetite is just one piece of the puzzle. Genetically modified rats, for example, illustrate different behavioral and physical outcomes depending on the nature of the mutation. Some overeat and maintain normal body weight. Some eat normally and get fat, and some both overeat and get fat. Conversely, to lose fat almost certainly requires restoration of the proper regulatory balance. By itself, the recommendation to "eat less and move more" is meaningless. We need to consider the effect on the hormonal and nervous-system mechanisms, which requires detailed thinking about the effects of food and exercise on human biochemistry.