Friday, May 21, 2010

Alzheimer's and RAGE

Something I wrote in an email a while ago . . .

Advanced glycation endproducts (AGEs) are the endpoints of some complicated chemistry that occurs when simple sugars (glucose, fructose, etc.) react with proteins (and apparently fats too). They’re toxic for a variety of reasons, and trigger an inflammatory response via the receptor for advanced glycation endproducts, or RAGE.

It turns out that RAGE binds to a whole bunch of things, and amongst them is the amyloid beta peptide, which is implicated in the development of Alzheimer’s. Amyloid beta is apparently produced via neural activity. I can’t figure out if it has a function or is just a by-product. I suspect it has some function, because the body has a mechanism for achieving a balance in the central nervous system (CNS). One kind of receptor (LRP) causes active transport out of the CNS to the blood, while RAGE triggers transport from the blood to the CNS across the blood-brain barrier. More RAGEs means you’ll have more amyloid beta in your brain. I couldn’t verify this, but I would guess that insulin drives the formation of RAGE. It makes sense, as your body would be preparing for glycation damage (more AGEs) from increased blood sugar, whether the source was food or glucose released due to stress. And indeed, diabetics have higher concentrations of RAGE (as do the blood vessels in the brains of Alzheimer’s victims).
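A minimal sketch of the transport balance just described, as a toy two-compartment model. The rate constants and function names are invented for illustration; the only point is that if RAGE-mediated influx rises while LRP-mediated efflux stays put, the steady-state brain concentration rises in proportion.

```python
# Toy two-compartment model (hypothetical rate constants, illustration only):
# blood <-> brain transport of amyloid beta across the blood-brain barrier.
# k_in  ~ RAGE-mediated influx (blood -> CNS)
# k_out ~ LRP-mediated efflux  (CNS -> blood)
# At steady state, influx = efflux, so the brain level scales with k_in / k_out.

def brain_steady_state(blood_conc, k_in, k_out):
    """Steady-state CNS amyloid beta level for simple first-order transport."""
    return blood_conc * k_in / k_out

baseline  = brain_steady_state(blood_conc=1.0, k_in=0.2, k_out=1.0)
more_rage = brain_steady_state(blood_conc=1.0, k_in=0.6, k_out=1.0)  # RAGE upregulated

print(f"baseline CNS level: {baseline:.2f}")
print(f"with more RAGE:     {more_rage:.2f}")  # triple the influx -> triple the level
```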

We learned today that stress actually increases amyloid beta production in the brain, via the action of corticotrophin releasing factor, or CRF. I got in contact with one of the authors of that study and he was nice enough to send me a reprint of the paper. It’s a pretty solid piece of research. Amongst other things, they showed that the more you stress mice, the more amyloid beta is produced. They could introduce CRF directly into the brain, and observe increased amyloid beta production. They could block the action of CRF, stress the mice, and see that less amyloid beta was produced. And finally they could directly block neural activity, and either stress the mice or introduce CRF, and again would see reduced amyloid beta. So it was a pretty solid case, albeit in mice. It would be surprising if humans turned out to be much different, though it’s certainly possible. CRF is released as part of the stress response. It is also released as a result of insulin-induced hypoglycemia, i.e. insulin goes up, blood sugar crashes, CRF pumps out.

One last piece of the puzzle: by itself, amyloid beta is soluble, and shouldn’t form solid plaques (or at least should do so slowly). But test-tube experiments show that formation of solid “fibrillar aggregates” of amyloid beta is accelerated if you provide seeds of altered amyloid beta. And what’s one form of the alteration? Glycation damage from sugar.

So, less than surprisingly, my hypothesis is that the route to Alzheimer’s mirrors that of heart disease. A high-carbohydrate diet leads to the following effects:
  1. Increase in density of receptors for advanced glycation endproducts, which leads to increased amyloid beta concentrations in the brain.
  2. Release of CRF, which increases production of amyloid beta in the brain.
  3. Damage to amyloid beta, which increases the formation rate of solid aggregates, which may be contributory toward the formation of the plaques associated with Alzheimer’s.
And of course, there’s the usual feedback between stress and diet: psychosocial stress makes you want to eat more carbohydrates, which makes you more stressed, etc.

Monday, May 10, 2010

When Listening to Scientists, Be Sure to Check Their Shoes

During college, I worked on and off as an intern at IBM Boulder. I remember when I changed departments, to work on management software for the facilities group (whose job it was to keep track of the walls and such - seriously, not as simple as you'd think). One of the senior guys, named Tom, took me out for my inaugural trip to the coffee machine. Back then we didn't have nice coffee set-ups like many companies do now, just a machine that gave you a little paper cup of battery acid for a quarter. As we approached, there were some people ahead of us at the machine, including one of the managers I had just met. "You're gonna owe me a coffee," said Tom.

"Uh, okay," I said, thinking it was some new guy tradition to buy coffee. "Why?"

"See that guy?" asked Tom, indicating the manager.

"Sure, " I replied.

"Check his shoes."

I dutifully looked at the shoes. Seeing nothing out of the ordinary, I asked "What about his shoes?"

"They're full of shit."

I bought the coffee.

When you're getting information from scientists or other "experts", there are some good signs that indicate when a shoe check might be needed (to see what they're full of). One of the best is when scientists argue for/against a particular hypothesis by lecturing about the scientific method, rather than presenting actual evidence. Usually this is a bitch-fest about how opponents of their views are unscientific self-interested boobs, while they cast themselves as Gandalf on the Bridge of Khazad-dûm (paraphrasing a bit):

You cannot pass! I am a servant of the Secret Fire, wielder of the Flame of Science. The dark fire will not avail you, Flame of Dumb-Dumb! Go back to the shadow. You shall not pass!


Riiiiiight.

(Of course, since I spend a good chunk of this blog lecturing about scientific method, maybe I should check my own shoes :-)

I recently came across a couple of excellent examples of exactly this phenomenon, and thought we'd all benefit (and maybe get a good laugh) from checking the shoes of those involved. The first is T. Colin Campbell's "review" of the latest Atkins diet book. I haven't read the book, and am no particular fan of Atkins over any other diet, beyond the fact that it applies well-understood metabolic principles to achieve predictable results. And I won't spend time dissecting Campbell's review. He doesn't say anything that amounts to much beyond the Gandalf quote above (I can't shake this mental image of Campbell on the bridge, wielding a carrot and handful of wheat against a cow with a platter of bacon on its back). Jimmy Moore already did a great job of chewing up Campbell's argument, so I'll direct you there and to the links within (definitely see also Chris Masterjohn's review of "The China Study", and Campbell's unintentionally humorous reply). I just find it funny that Campbell is lecturing anybody about the scientific method, when he seems to apply it selectively, if at all. For instance, see his discussion about his personal "scientific philosophy" and "holistic" approach in The Protein Debate. I think it's pretty clear that Campbell is a conditional fan of the "scientific method," as long as it leads you to conclusions that agree with his own.

BTW, if you haven't read The Protein Debate, you should. For a long time you had to pay for access, but now it seems to be available for free. Loren Cordain provides a review of a lot of interesting evidence ranging from archaeological to biological, along with tons of references. Cordain has his own axe to grind, of course, so don't be fooled into thinking he's giving the whole picture. But he certainly provides a lot more background (164 references) than Campbell (0 references). Funny that Campbell complained in his Amazon review that Atkins never published a peer-reviewed paper and lectured on the requirement of peer review in "real" science (shoe check), yet provides no references at all when arguing his own position. Read Campbell's part in the debate for lots of "check his shoes" examples. Plus it's great fun to see Campbell get handed his own ass - on a platter, with a side of bacon.

The second example is a letter to Science Magazine, entitled "Climate Change and the Integrity of Science". According to the guardian.co.uk,

A group of 255 of the world's top scientists today wrote an open letter aimed at restoring public faith in the integrity of climate science.

In a strongly worded condemnation of the recent escalation of political assaults on climatologists, the letter, published in the US Journal Science and signed by 11 Nobel laureates, attacks critics driven by "special interests or dogma" and "McCarthy-like" threats against researchers. It also attempts to set the record straight on the process of rigorous scientific research.



Wow, 255 scientists including 11 Nobel laureates? That's a lot of shoes to check. And we'll have to check those of Nobel winners twice.

The letter actually gets off to a good start:

We are deeply disturbed by the recent escalation of political assaults on scientists in general and on climate scientists in particular. All citizens should understand some basic scientific facts. There is always some uncertainty associated with scientific conclusions; science never absolutely proves anything. When someone says that society should wait until scientists are absolutely certain before taking any action, it is the same as saying society should never take action. For a problem as potentially catastrophic as climate change, taking no action poses a dangerous risk for our planet.


Clearly you cannot wait until uncertainties are resolved before making choices about how to deal with the possible outcomes of those uncertainties. And in theory, science is all about performing inference in the face of uncertainty, understanding how incomplete information about the world informs beliefs about competing hypotheses. Alas, the letter ruins this excellent start by espousing the opposite course, demanding that we agree with their "facts":

Scientific conclusions derive from an understanding of basic laws supported by laboratory experiments, observations of nature, and mathematical and computer modeling. Like all human beings, scientists make mistakes, but the scientific process is designed to find and correct them. This process is inherently adversarial—scientists build reputations and gain recognition not only for supporting conventional wisdom, but even more so for demonstrating that the scientific consensus is wrong and that there is a better explanation. That's what Galileo, Pasteur, Darwin, and Einstein did. But when some conclusions have been thoroughly and deeply tested, questioned, and examined, they gain the status of "well-established theories" and are often spoken of as "facts."

For instance, there is compelling scientific evidence that our planet is about 4.5 billion years old (the theory of the origin of Earth), that our universe was born from a single event about 14 billion years ago (the Big Bang theory), and that today's organisms evolved from ones living in the past (the theory of evolution). Even as these are overwhelmingly accepted by the scientific community, fame still awaits anyone who could show these theories to be wrong. Climate change now falls into this category: There is compelling, comprehensive, and consistent objective evidence that humans are changing the climate in ways that threaten our societies and the ecosystems on which we depend.


Oh brother, how much self-aggrandizing hyperbole can you pack into two paragraphs? Right off we get the lecture on the scientific method. The authors compare themselves to Galileo, Pasteur, Darwin, and Einstein (such name-dropping is another indication a shoe check is required). The comparison with other "well-established" theories also needs some examination relative to the anthropogenic global warming (AGW) hypothesis:
  • The Big Bang (or whatever process created the Universe), formation of the Earth, and evolution have all occurred already. For that matter, so has significant climate change on Earth, without help from human beings. What we don't have is a way of testing specific predictions about the behavior of a very complex nonlinear system, namely that human behavior is the driving force behind the recently observed global temperature variations, and that changes in human behavior can alter the course of future climate change. Big difference.
  • The Big Bang, while "well-established" in the minds of physicists, is really only well-established in a semi-dogmatic sense. There are fairly major holes in the theory, in terms of predictive power. The current hypothesis required for getting from a Big Bang event to the Universe observed today ("inflation") has no evidential support - at all. It may be the best hypothesis we have at this point, but there's plenty of room for it to be supplanted by new information (and it wouldn't require much). The example is the most appropriate one for comparison to the AGW hypothesis, though for reasons opposite what the authors intended.
  • Estimates of the age of the Earth leverage some other very basic "facts", amongst them that the statistical behaviors of radioactive elements are observed to be the same every time we look. The nucleus of an atom on the Earth can largely be treated as an isolated system: it doesn't have a whole lot of complex interactions with the environment, and in particular there really aren't any nonlinear feedback loops or other dynamical behavior to consider when doing radioactive dating (a toy worked example follows after this list). Inference of the age of the Earth can then be performed with some accuracy, as the relevant "givens" and observations don't admit much uncertainty. By contrast, global climate has many MANY interacting variables, most of which we probably don't even know about yet, and considerable uncertainty underlying the ones we do know about. It is difficult to see how any specific prediction of the future dynamic behavior of global climate could be as accurate as that for the past behavior of radioactive elements that have been sitting around in a rock for billions of years.
  • Evolution is about as close to a "fact" as you're going to get. First of all, it effectively follows from a combination of the "laws" of thermodynamics (mainly the first and second) and the ability of a system (whether it is a molecule or a complex organism) to a) maintain a relatively narrow set of states against environmental fluctuations, and b) reproduce itself at a rate greater than its destruction. Evolution is just math, in the end. And of course, it is observed repeatedly in the laboratory and in Nature. There may be many specific models that predict different evolutionary endpoints, or routes by which currently observed endpoints were achieved. But the fundamental phenomenon, that mutable self-reproducing systems will evolve, applies to all of these models, and all predictions are necessarily consistent with this "meta-behavior". By contrast, global climate is an instance of a specific system, which we model given what (very little) we know about the intertwined physical, chemical, and biological systems on the Earth, and continued warming is a specific prediction of that model. As climate is a system showing chaotic behavior across many timescales, it may be fundamentally unpredictable, for all practical purposes. So calling this prediction a "fact" stretches thin even the approximate definition of "fact" made by the authors.
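
Here is the toy worked example promised above, showing why radiometric dating is such a clean inference problem. The mineral and the measured daughter/parent ratio are hypothetical; the half-life of uranium-238 is the standard textbook value, and the point is just that the age falls out of one equation with no feedbacks to model.

```python
from math import log

# Toy radiometric-dating calculation (hypothetical sample; illustration only).
# For a parent isotope decaying with constant lambda, the age follows directly
# from the measured daughter-to-parent ratio D/P:
#     t = ln(1 + D/P) / lambda
# The nucleus is effectively an isolated system, so there are no feedback loops
# or environmental interactions to model - just exponential decay at a lab-measured rate.

HALF_LIFE_U238 = 4.468e9                 # years
decay_constant = log(2) / HALF_LIFE_U238

daughter_to_parent = 1.0                 # hypothetical measured ratio in a mineral grain
age_years = log(1 + daughter_to_parent) / decay_constant

print(f"Inferred age: {age_years / 1e9:.2f} billion years")  # ~4.47 for D/P = 1
```
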
The letter goes on to state a variety of "facts" or "conclusions" which the authors imply are more or less incontrovertible, which would seem to contradict their initial points about uncertainty and the scientific method. I think the key problem here (and in most science) is the idea that there is any "conclusion" in science. The only real conclusion is the relative belief in one hypothesis over competing hypotheses, as opposed to a specific identification of "truth". But standard statistics is completely backwards on this point, instead testing whether the observed data are likely given that a hypothesis is true. It's not the likelihood of the hypothesis being tested, but that of the data. The truth of the hypothesis is assumed in this analysis. So when a scientist finds that their data are strongly consistent with their hypothesis, they "conclude" the hypothesis is a "fact". But that ignores both any prior information (similar to "black box" diet studies which don't include knowledge of metabolism in assessing outcomes) as well as competing hypotheses. Your pet hypothesis might be consistent with the data at the 99% level, but if mine is 99.9% consistent, and furthermore more consistent with other prior information, then it is more likely to be true (a toy numerical comparison follows the quoted passage below). By not quantitatively assessing competing hypotheses, the authors of the letter are guilty of exactly the sort of "hiding heads in the sand" behavior of which they accuse their detractors:

We also call for an end to McCarthy-like threats of criminal prosecution against our colleagues based on innuendo and guilt by association, the harassment of scientists by politicians seeking distractions to avoid taking action, and the outright lies being spread about them. Society has two choices: We can ignore the science and hide our heads in the sand and hope we are lucky, or we can act in the public interest to reduce the threat of global climate change quickly and substantively. The good news is that smart and effective actions are possible. But delay must not be an option.
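
Back to the hypothesis-comparison point above, here is the toy numerical sketch I promised. All the numbers are made up; the only thing it illustrates is that weighing competing hypotheses means combining how well each fits the data with how consistent each is with prior information, rather than declaring one a "fact" in isolation.

```python
# Toy Bayesian comparison of two competing hypotheses (all numbers invented).
# Posterior odds = (likelihood_A * prior_A) / (likelihood_B * prior_B):
# how well each hypothesis explains the data, weighted by prior information.

def posterior_odds(likelihood_a, prior_a, likelihood_b, prior_b):
    """Odds in favor of hypothesis A over hypothesis B, given the same data."""
    return (likelihood_a * prior_a) / (likelihood_b * prior_b)

# A: "consistent with the data at the 99% level", but weakly supported by prior info.
# B: consistent at the 99.9% level, and more consistent with other prior information.
odds = posterior_odds(likelihood_a=0.99, prior_a=0.3,
                      likelihood_b=0.999, prior_b=0.7)

print(f"Odds of A over B: {odds:.2f}")  # about 0.42 - B is the better-supported hypothesis
```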


I think everybody involved here is "ignoring the science" in one way or another. Threats of criminal prosecution are the sort of idiot knee-jerk response made by politicians, who, incapable of thinking for themselves, blindly follow the "expert du jour". When it turns out the politician made stupid and shortsighted decisions based on "expert" advice, they want to turn on the expert rather than accepting responsibility for acting like an idiot. Physician, heal thyself!

But the authors of this letter are no better. AGW proponents seem to ignore the elephant in the living room: the climate is probably going to change at some point whether or not human activity has anything to do with it. If anything is going to doom humanity, it is our anthropocentric view, that we are the masters of the Earth, able to bend Nature to our will. History shows that environmental conditions are largely unstable, requiring organisms to adapt or die. We clearly should not ignore the possibility of climate change and the effects it will have on human life. But should we focus our resources on trying to force Nature to behave as we wish (and probably failing over the long term)? Or is it better to learn from history, assume that change is coming, and figure out how we will adapt to Nature's whims? I'm guessing the personal goals of the "scientists" align strongly with one of these scenarios, not so much the other.

And that's the real issue with both examples: the gap between the personal goals of those providing information and the goals of the receivers of that information. I've discussed this before, more in the context of organizations like pharmaceutical companies. But scientists are just as self-interested as any other organism or organization. The personal goals of academic scientists are centered around career advancement and getting funding for research. For both, you need to make some scientific hypothesis and be "right" about it, not necessarily in the sense of having actual evidence quantitatively weighting the hypothesis, but in getting some large chunk of the scientific community to buy in. Achieving said buy-in is the core goal of academic scientists, and whether or not "consensus" is obtained through actual evidence isn't really relevant to the practitioners. They generally think that the consensus so obtained is itself evidence that they're right, but there's circular reasoning and confirmation bias written all over that. When you are evaluating the evidence put forth by a scientist, you not only must evaluate the quality of that evidence, but also the context in which it is presented, because the presenter undoubtedly (and probably unconsciously) re-weights things based on their own beliefs and goals. The scientist has a vested interest in being considered "right", which can be a lot different than actually being "right". The stronger those beliefs and goals relative to the actual evidence, the more likely you'll hear about "facts" and the "scientific method" as opposed to detailed evidence, both supportive and contradictory.

So when a scientist speaks, be sure to check the shoes.

Saturday, May 8, 2010

Mother Nature (and Monsanto): Thriving on The Law of Unintended Consequences

I loved this article: U.S. Farmers Cope with Roundup-Resistant Weeds. Here's an excerpt:

Roundup — originally made by Monsanto but now also sold by others under the generic name glyphosate — has been little short of a miracle chemical for farmers. It kills a broad spectrum of weeds, is easy and safe to work with, and breaks down quickly, reducing its environmental impact.

Sales took off in the late 1990s, after Monsanto created its brand of Roundup Ready crops that were genetically modified to tolerate the chemical, allowing farmers to spray their fields to kill the weeds while leaving the crop unharmed. Today, Roundup Ready crops account for about 90 percent of the soybeans and 70 percent of the corn and cotton grown in the United States.

But farmers sprayed so much Roundup that weeds quickly evolved to survive it. “What we’re talking about here is Darwinian evolution in fast-forward,” Mike Owen, a weed scientist at Iowa State University, said.

Now, Roundup-resistant weeds like horseweed and giant ragweed are forcing farmers to go back to more expensive techniques that they had long ago abandoned.


My first reaction on reading this was that Monsanto obviously screwed up. I mean, what idiot couldn't see this coming? But on second thought I'll bet they did see it coming. The later portion of the article discusses how Monsanto and other chemical companies are developing genetically-modified food plants (wheat, corn, soy) to be resistant to other herbicides as well (including one using a component of Agent Orange - mmmmm, Agent Orangey tofu). So of course, farmers will now have to buy additional herbicides, and probably pony up more cash for the next generation of resistant seeds. And you can see that going on indefinitely, with the cash register ringing the whole time for Monsanto etc.

And to be clear: I don't think that companies like Monsanto are doing something evil. They're behaving exactly the way we ask them to in a (more or less) free market economy. They are taking a strategy that maximizes their value (or at least their assessment of it). That strategy may or may not have anything to do with maximizing your health or minimizing environmental impact. If there's any evil here, it's that of complacency on the part of the consumers, who (as a group) hold the ultimate power to change how corporations value their strategy. Corporations are notoriously short-sighted, as demonstrated by how readily many major financial institutions drove their respective buses off of a cliff recently. The start-up I used to work for developed a whole set of mathematical and software tools with the idea of allowing public corporations to value long-term strategy in the face of uncertainty. We spent some time studying how corporations actually make decisions vs. how they should decide, given a way of optimizing value based on whatever they knew (and knew they didn't know). The gap is typically quite large. Corporations, like people, are shortsighted, and much better at rationalizing why they did something after the fact than making a rational decision in the first place.

The good news is that corporate myopia gives consumers a fairly large lever. If you want corporations to "care" about your long-term health and well-being, be an informed consumer, and make your buying choices to reflect your own goals. It's the "informed" part that's important here.

I wonder how the course chosen by chemical/seed companies will play out. Maybe something like this:

  • Continued increase in spectrum of pesticides, resistance of weeds, and genetic engineering of food crops. At some point, the weeds are basically resistant to anything that won't outright kill humans.
  • Companies introduce a genetically modified bug to eat the weeds. New food crops are engineered to produce chemicals that repel the bugs. The insects eventually kill off most of the weeds, but evolve to be resistant to the food crop insect toxins, and start eating our food.
  • Cycle continues, introducing ever-more genetically engineered species from higher in the food web.
  • Eventually, genetically-engineered humans are produced to act as workers to contain all of the new pest species. These "humans" are built to thrive on weeds, and as such prove to have considerably greater reproductive fitness than the old-school "natural" humans, whose fate as a species is basically sealed.

Friday, May 7, 2010

Why do you eat grains?

That question isn't as smart-assed as it sounds. Bear with me.

I've blabbed before as to how I've often asked nutrition experts "What's so healthy about 'healthy whole grains'?" I've never gotten an actual answer, and as far as I can tell the best one could say is "nothing in particular." And while I have discussed the possible ways that grain consumption could lead to disease, I would have to admit that the evidence that grains have some particular disease-causing properties (outside of those with obvious clinically-detectable problems, like celiac) seems more correlation than causation at this point.

So I've started rethinking this question more as "why does anybody eat anything?" Clearly the need, at some level, to seek out and consume food has to be innate. And animals evolve amazingly complex behaviors around food. I remember giving my dog an egg for the first time, shell and all. As he does with any food, I expected him to swallow it more or less whole, maybe with a couple of crunches for good measure. Instead, he gently picked it up from his bowl, put it on the ground, and ever-so-delicately cracked it open with his front teeth, then licked out the inside and left the shell. I'm pretty sure that wasn't a learned behavior, unless he's been climbing trees and getting into robins' nests behind my back.

But in general, and probably particularly for omnivores, directed behavior associated with food (like "go find some more of those sweet orange spherical thingies") is learned. Babies put everything in their mouths for a reason: they're figuring out which things are worth seeking out and sticking in their mouths again. You may want to check out this fascinating paper on the topic. The short version is this: there seem to be two main areas of the brain associated with taste. The primary taste cortex handles the innate sensing of taste: sweet, salt, bitter, sour, and umami, along with the texture and viscosity of food (to sense fat), temperature, capsaicin, etc. The response of the primary taste cortex is NOT attenuated by satiety. Something sweet tastes just as sweet whether you're hungry or full. But the primary taste cortex doesn't assign value to a particular taste, i.e. it does not decide whether something tastes "good" or "bad". That's the job of the secondary taste cortex. It is the secondary taste cortex that "decides" sweet things taste good when you're hungry, but not so much after eating a whole box of candy. Secondary taste cortex neurons learn what's good and what isn't, and are further tuned to specific foods. For instance, you can be fed to satiety with fat, and certain neurons will decrease their response to further fat. But the response of those same neurons to the taste of glucose does not decrease, regardless of whether or not you're full of butter. In other words, "there's always room for dessert".
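
As a toy illustration of that split between sensing and valuation (with entirely made-up numbers): intensity comes from the primary taste cortex and ignores fullness, while value comes from the secondary taste cortex and is dialed down only for the specific food you're full of.

```python
# Toy separation of "how intense does it taste" (primary taste cortex, unaffected by
# fullness) from "how good does it taste" (secondary taste cortex, dialed down per food).
# All numbers are invented; this just mirrors the two-cortex story above.

def perceived_intensity(stimulus):
    """Primary taste cortex: intensity does not change with satiety."""
    return stimulus

def perceived_value(stimulus, food, satiety):
    """Secondary taste cortex: value is scaled down only by satiety for that food."""
    return stimulus * (1.0 - satiety.get(food, 0.0))

satiety = {"fat": 0.9, "glucose": 0.1}   # stuffed with butter, barely any sugar yet

print(f"sweetness, hungry or full: {perceived_intensity(1.0):.1f}")                  # 1.0
print(f"appeal of more butter:     {perceived_value(1.0, 'fat', satiety):.1f}")      # 0.1
print(f"appeal of dessert:         {perceived_value(1.0, 'glucose', satiety):.1f}")  # 0.9
```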

Anyway, let me get to the punch-line from the closing paragraph:

The outputs of the orbitofrontal cortex reach brain regions such as the striatum, cingulate cortex, and dorsolateral prefrontal cortex where behavioural responses to food may be elicited because these structures produce behaviour which makes the orbitofrontal cortex reward neurons fire, as they represent a goal for behaviour. At the same time, outputs from the orbitofrontal cortex and amygdala, in part via the hypothalamus, may provide for appropriate autonomic and endocrine responses to food to be produced, including the release of hormones such as insulin.


In other words, the external response to food (behavior) is a learned response driven by the secondary taste cortex, while the internal response (e.g. hormonal) is innate, originating in the primary taste cortex. That means that you learn what things taste "good" by the secondary taste cortex integrating feedback (positive and negative) from the rest of the body (primary taste cortex, glucose sensors, etc.), reinforcing or weakening the association of that taste with the behavior that led to those stimuli. So the fact that you "like" potato chips is intimately tied up with the impulse to get off the couch at midnight and stumble into the kitchen to finish off the bag. And the only reason you "like" any food is because your brain learned to, associating the flavor with some feedback signals which it interprets as being associated with a net positive outcome.

One other point which is probably obvious, but important: the smaller the time between the flavor stimulus and relevant physiological response, the stronger the change in association with the behavior. Thus, getting cancer 10 years after eating a poisonous plant is not very helpful in weakening that behavior. It is certainly possible to crave something that produces a strong short-term reward, but has a net negative outcome. The brain (both consciously and unconsciously) is notably short-sighted in its assessment of value.
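
A minimal sketch of that short-sightedness, assuming a simple learned-value update with exponential discounting of delayed feedback (all parameters invented): the immediate blood-sugar hit dominates the association, while a harm ten years out barely moves it.

```python
# Toy sketch of how feedback delay weakens learned food-value associations
# (made-up parameters; just illustrates the short-sighted credit assignment).

def updated_value(old_value, reward, delay_hours, learning_rate=0.5, discount_per_hour=0.9):
    """Update a learned taste value, discounting rewards (or harms) by their delay."""
    effective_reward = reward * (discount_per_hour ** delay_hours)
    return old_value + learning_rate * (effective_reward - old_value)

value = 0.0
# Immediate hit: blood sugar spike minutes after the potato chips.
value = updated_value(value, reward=+1.0, delay_hours=0.1)
# Distant harm: disease risk years later barely registers in the association.
value = updated_value(value, reward=-10.0, delay_hours=24 * 365 * 10)

print(f"Learned value of the taste: {value:.3f}")  # still positive despite the big delayed cost
```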

Which brings me back to the original question: why do people eat grains? And I don't mean that as implying there's some moral judgment to be made - food morality is just another religion. And there's obviously a spectrum of answers depending on the temporal proximity of the act of eating to a specific endpoint. On one end is "prepared properly, they taste good" (I like sourdough toast dripping in butter as much as the next guy, though I eat it rarely). On the other end is the evolutionary argument so brilliantly put forth by Kurt Harris, basically that the net effect of domesticating grains was an advantage in reproductive fitness over hunter-gatherers, regardless of the relative "health" of those doing the reproducing. Evolution cares about making babies, and doesn't care if you have bad teeth and a bum ticker, as long as you contribute genes to more babies than the guy still killing perfectly serviceable beasts of burden with a rock on a stick.

No, I'm interested in the middle area (logarithmically speaking): why we learned to like grains in the first place. And why do we like them so much that we're willing to go to some amount of trouble to eat them? Why do I so love sourdough toast and butter, even though it doinks my blood sugar and gives me acne?

(Maybe it's the butter - New Zealand makes REALLY good butter.)

I have nothing but vague guesses, and am hoping to get some interesting discussion in the comments.