Sunday, January 31, 2010

Vitamin D deficiency, seasonal depression, and diseases of civilization

George Hamilton admits that he has been addicted to sunbathing for much of his life. The photo below (from: phoenix.fanster.com) shows him at the age of about 70. In spite of possibly too much sun exposure, he looks young for his age, appears to be in remarkably good health, and is free from skin cancer. How come? Maybe his secret is vitamin D.


Vitamin D is a fat-soluble pro-hormone; not actually a vitamin, technically speaking. That is, it is a substance that is a precursor to hormones, which are known as the calciferol hormones (calcidiol and calcitriol). The hormones synthesized by the human body from vitamin D have a number of functions. One of these functions is the regulation of calcium in the bloodstream via the parathyroid glands.

The biological design of humans suggests that we are meant to obtain most of our vitamin D from sunlight exposure. Vitamin D is produced from cholesterol as the skin is exposed to sunlight. This is one of the many reasons (see here for more) why cholesterol is very important for human health.

Seasonal depression is a sign of vitamin D deficiency. It often occurs during the winter, when sun exposure is significantly decreased; this is known as seasonal affective disorder (SAD). This alone is a cause of many other health problems, as depression (even if it is seasonal) may lead to obesity, injury due to accidents, and even suicide.

For most individuals, as little as 10 minutes of sunlight exposure generates many times the recommended daily value of vitamin D (400 IU), whereas a typical westernized diet yields about 100 IU. The recommended 400 IU (1 IU = 25 ng) is believed by many researchers to be too low, and levels of 1,000 IU or more to be advisable. The upper limit for optimal health seems to be around 10,000 IU. It is unlikely that this upper limit can be exceeded due to sunlight exposure, as noted below.
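Just to make the unit conversion concrete, here is a minimal Python sketch (my own illustration; the only assumption is the conversion stated above, 1 IU of vitamin D = 25 ng, i.e., 0.025 mcg).

IU_TO_MCG = 0.025  # 1 IU of vitamin D = 25 ng = 0.025 mcg

# The intake figures discussed above, converted to micrograms:
for iu in (100, 400, 1000, 10000):
    print(iu, "IU =", iu * IU_TO_MCG, "mcg")

# Prints: 100 IU = 2.5 mcg, 400 IU = 10.0 mcg, 1000 IU = 25.0 mcg, 10000 IU = 250.0 mcg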

Cod liver oil is a good source of vitamin D, with one tablespoon providing approximately 1,360 IU. Certain oily fish species are also good sources; examples are herring, salmon and sardines. For optimal vitamin and mineral intake and absorption, it is a good idea to eat these fish whole. (See here for a post on eating sardines whole.)

Periodic sun exposure (e.g., every few days) has a similar effect to daily exposure, because vitamin D has a half-life of about 25 days. That is, without any use by the body, it would take approximately 25 days for vitamin D levels to fall to half of their maximum levels.
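For readers who like to see the arithmetic behind the 25-day half-life, below is a minimal Python sketch (my own illustration, not taken from any of the references; the function name and the sampling days are arbitrary).

def fraction_remaining(days, half_life_days=25.0):
    # Simple exponential decay: after one half-life 50% remains, after two 25%, and so on.
    return 0.5 ** (days / half_life_days)

# With no new sun exposure or intake, roughly 76% of stored vitamin D
# would remain after 10 days, 50% after 25 days, and 25% after 50 days.
for d in (10, 25, 50):
    print(d, "days:", round(fraction_remaining(d) * 100), "percent remaining")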

The body responds to vitamin D intake in a "battery-like" manner, fully replenishing the battery over a certain amount of time. This could be achieved by moderate (pre-sunburn) and regular sunlight exposure over a period of 1 to 2 months for most people. Like most fat-soluble vitamins, vitamin D is stored in fat tissue, and slowly used by the body.

Whenever sun exposure is limited or sunlight is scarce for long periods of time, supplementation may be needed. Excessive supplementation of vitamin D (i.e., significantly more than 10,000 IU per day) can cause serious problems, as the relationship between vitamin D levels and health complications follows a U-curve pattern. These problems can be acute or chronic. In other words, too little vitamin D is bad for our health, and too much is also bad.

The figure below (click on it to enlarge), from Tuohimaa et al. (2009), shows two mice. The one on the left has a genetic mutation that leads to high levels of vitamin D-derived hormones in the blood. Both mice are about the same age, 8 months, but the mutant mouse shows marked signs of premature aging.


It is important to note that the skin wrinkles of the mouse on the left have nothing to do with sun exposure; they are associated with excessive vitamin D-derived hormone levels in the body (hypervitaminosis D) and related effects. They are a sign of accelerated aging.

Production of vitamin D and related hormones based on sunlight exposure is tightly regulated by various physiological and biochemical mechanisms. Because of that, it seems to be impossible for someone to develop hypervitaminosis D due to sunlight exposure. This does NOT seem to be the case with vitamin D supplementation, which can cause hypervitaminosis D.

In addition to winter depression, chronic vitamin D deficiency is associated with an increased risk of the following chronic diseases: osteoporosis, cancer, diabetes, autoimmune disorders, hypertension, and atherosclerosis.

The fact that these diseases are also known as the diseases of civilization should not be surprising to anyone. Industrialization has led to a significant decrease in sunlight exposure. In cold weather, our Paleolithic ancestors would probably seek sunlight. That would be one of their main sources of warmth. In fact, one does not have to go back that far in time (100 years should be enough) to find much higher average levels of sunlight exposure than today.

Modern humans, particularly in urban environments, have artificial heating, artificial lighting, and warm clothes. There is little or no incentive for them to try to increase their skin's sunlight exposure in cold weather.

References:

Hoogendijk, W., Beekman, A., Deeg, D., Lips, P., & Penninx, B. (2009). Depression is associated with decreased 25-hydroxyvitamin D and increased parathyroid hormone levels in old age. European Psychiatry, 24(Supplement 1), S317.

Tuohimaa, P., Keisala, T., Minasyan, A., Cachat, J., & Kalueff, A. (2009). Vitamin D, nervous system and aging. Psychoneuroendocrinology, 34(Supplement 1), S278-S286.

Saturday, January 30, 2010

Cancer patterns in Inuit populations: 1950-1997

Some types of cancer have traditionally been more common among the Inuit than in other populations, at least according to data from the 1950s, when a certain degree of westernization had already occurred. The incidence of the following types of cancer among the Inuit has been particularly high: cancers of the nasopharynx, salivary gland, and oesophagus.

The high incidence of these “traditional” types of cancer among the Inuit is hypothesized to have a strong genetic basis. Nevertheless, some also believe these cancers to be associated with practices that were arguably not common among the ancestral Inuit, such as the preservation of fish and meat with salt.

Genetic markers in the present Inuit population show a shared Asian heritage, which is consistent with the higher incidence of similar types of cancer among Asians, particularly those consuming large amounts of salt-preserved foods. (The Inuit are believed to originate from East Asia, having crossed the Bering Strait about 5,000 years ago.)

The incidence of nasopharyngeal, salivary gland, and oesophageal cancers has been relatively stable among the Inuit from the 1950s on. More modern lifestyle-related cancers, on the other hand, have increased dramatically. Examples are cancers of the lung, colon, rectum, and female breast.

The figure below (click on it to enlarge), from Friborg & Melbye (2008), shows the incidence of more traditional and modern lifestyle-related cancers among Inuit males (top) and females (bottom).


Two main lifestyle changes are associated with this significant increase in modern lifestyle-related cancers. One is increased consumption of tobacco. The other, you guessed it, is a shift from animal protein and fat to refined carbohydrates as the main source of energy.

Reference:

Friborg, J.T., & Melbye, M. (2008). Cancer patterns in Inuit populations. The Lancet Oncology, 9(9), 892-900.

How to break a coconut

The coconut is often presented as a healthy food choice, which it is, as long as you are not allergic to it. Coconut meat has a lot of saturated fat, which is very good for the vast majority of us.

(I posted about this issue elsewhere on this blog: my own experience and research suggest that saturated fat is very healthy for most people as long as it is NOT consumed together with refined carbs and sugars from industrialized food products.)

Coconut water is a good source of essential minerals, particularly magnesium and potassium. So is coconut meat, which is rich in iron, copper, manganese, and selenium. Coconut meat is also a good source of folate and an excellent source of dietary fiber.

If you are buying coconuts at a supermarket, I suggest choosing coconuts that have a lot of water in them. They seem to be the ones that taste the best. Just pick a coconut up and shake it. If it feels heavy and full of water, that’s the one.

First you need to make some holes in the coconut shell to extract the water. I recommend using a hammer and a screwdriver. The screwdriver should be used only for this purpose, so you can keep it clean. Nails can be too thin. Place the coconut over a mitten or towel, and make holes at the dark spots (usually three) using the hammer and screwdriver.


Once you puncture the coconut, move the screwdriver around a bit to enlarge each hole. Then place the coconut over a cup or thermos, with the holes pointing downwards, and let the water flow out of it. Normally I use a thermos, so that I can keep the coconut water fresh for later consumption.


As soon as all the coconut water is out, hold the coconut with a mitten in one hand, and strike it with the hammer with the other hand. The key here is to hold the coconut with your hand. You need to strike it hard. It is a good idea to do this inside or right above a kitchen sink so that the shell pieces fall into it.


Do not place the coconut against a hard surface (e.g., ceramic tiles) while striking it; otherwise you may either break that surface or send pieces of the coconut flying all over the place. Strike different areas of the coconut until it breaks into 5 to 7 pieces.

Finally, remove the meat of the coconut with a butter knife. The hand that holds the knife should be protected with a mitten, because you will have to apply pressure with it.


Store the coconut water in a sealed thermos, and the coconut meat pieces in a sealed container, both in the refrigerator, to preserve their freshness.

Coconut water and meat have a slightly sweet taste because of their sugar content, which is small and packed in with a lot of fiber. 100 g of coconut meat has about 15 g of carbs, of which 9 g is dietary fiber; that is, 100 g of coconut meat has only 6 g of net carbs.
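Here is the net carb arithmetic above as a minimal Python sketch (my own illustration; the gram figures are the approximate ones quoted in the paragraph above).

def net_carbs(total_carbs_g, fiber_g):
    # Net carbs = total carbohydrates minus dietary fiber.
    return total_carbs_g - fiber_g

# Per 100 g of coconut meat (approximate values): 15 g carbs, 9 g fiber.
print(net_carbs(15, 9))  # prints 6 (g of net carbs)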

Friday, January 29, 2010

Heavy Metal Biochemical Assessments

Mercury

Mercury’s recent presence in the body can be assessed with blood and urine samples because the initial half-life of blood mercury elimination is 3 days. The half-life of elimination for whole body mercury is between 60 and 90 days. Generally, the levels of mercury are below 10 mcg per liter in urine and below 40 mcg per liter in blood. Hair analysis can be useful as an estimate of long-term exposure to mercury.
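As a rough illustration of what a 3-day blood half-life implies, here is a minimal Python sketch (my own; the starting and target levels are hypothetical, and this is not a clinical calculation) of how long it would take a blood mercury level to fall to a given value, assuming simple exponential elimination.

import math

def days_to_fall_to(level_now, target_level, half_life_days=3.0):
    # Time for a level to decay to a target under exponential elimination:
    # t = half-life * log2(current level / target level).
    return half_life_days * math.log2(level_now / target_level)

# Hypothetical example: a blood level of 80 mcg/L falling to 10 mcg/L
# would take about 9 days at a 3-day half-life.
print(round(days_to_fall_to(80, 10), 1))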

To diagnose acute mercury toxicity, symptoms of respiratory distress are evaluated along with a lab evaluation that includes a complete blood count and differential, serum electrolytes, glucose, liver and renal function tests, and urinalysis. Chest radiography and serial arterial blood gas (ABG) measurements should be used for patients with severe inhalation exposure.

Reference: http://www.atsdr.cdc.gov/MHMI/mmg46.html

Lead

Blood lead levels can assess recent exposure to lead, and this is the primary screening method for lead exposure. Exposure can also be assessed by measuring erythrocyte protoporphyrin, but this test is not sensitive enough to detect blood lead levels below 25 mcg per deciliter in children. Because lead later travels to soft tissues and eventually to bones and teeth over a period of weeks, long-term exposure can be measured in bones and teeth with X-ray techniques.

Reference: http://www.atsdr.cdc.gov/toxprofiles/phs13.html

Cadmium

Cadmium in urine is best for determining the level of both recent and past exposure in the body. Analysis of hair and nails is not as useful because of possible contamination from the environment. Blood cadmium can be useful to determine recent exposure in the body.

Reference: http://www.atsdr.cdc.gov/tfacts5.html

Wednesday, January 27, 2010

The low modern potassium-to-sodium ratio: Big problem or much ado about nothing?

It has been argued that the diets of our Paleolithic ancestors had on average a much higher potassium-to-sodium ratio than modern diets (see, e.g., Cordain, 2002).

This much lower modern ratio is believed by some to be the cause of a number of health problems, including: high blood pressure, stroke, heart disease, memory decline, osteoporosis, asthma, ulcers, stomach cancer, kidney stones, and cataracts.

But, is this really the case?

The potassium-to-sodium ratio in ancient and modern times

According to some estimates, our Paleolithic ancestors’ daily consumption was on average about 11,000 mg of potassium and about 700 mg of sodium. That yields a potassium-to-sodium ratio of about 16. Today’s ratio in industrialized countries is estimated to be around 0.6.

Just for the sake of illustration, let us compare a healthy Paleolithic diet food, walnuts, with a modern industrialized food that many believe to be quite healthy, whole-wheat bread. The table below (click on it to enlarge) compares these two foods in terms of protein, carbohydrate, fat, vitamin, and mineral content.


Walnuts have a potassium-to-sodium ratio of about 205. The whole-wheat bread’s ratio is about 0.5; much lower, and close to the overall ratio estimated for industrialized countries mentioned above.
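For readers who want to check these ratios for other foods, here is a minimal Python sketch (my own illustration; the only numbers used are the estimated Paleolithic intakes quoted earlier in this post).

def k_to_na_ratio(potassium_mg, sodium_mg):
    # Potassium-to-sodium ratio; both amounts in milligrams (the ratio itself is unitless).
    return potassium_mg / sodium_mg

# Estimated Paleolithic daily intake from the text: ~11,000 mg potassium, ~700 mg sodium.
print(round(k_to_na_ratio(11000, 700), 1))  # about 15.7, i.e., roughly 16

# To check a specific food, plug in the potassium and sodium amounts
# from its label or from a nutrition database (per-100 g values work fine).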

At the same time, walnuts provide better nutritional value than whole-wheat bread, including a good amount of omega-3 fatty acids (2.5 g of α-linolenic acid, or ALA). However, walnuts also have a fairly high omega-6 fat content.

Also, many diabetics experience elevated blood glucose levels in response to whole-wheat bread, in spite of its glycemic index being supposedly lower than that of white bread. Walnuts do not seem to cause this type of problem, even though some people are allergic to walnuts (and other tree nuts).

Health effects of the potassium-to-sodium ratio

So, the potassium-to-sodium ratio appears to have been much higher among our Paleolithic ancestors than it is today. It is important to stress that, even though this is a possibility, we do not know it for sure. Animals go to great lengths to find salt licks, and then consume plenty of sodium from them. Our ancestors could have done that too. Also, we know that sodium deficiency can be deadly to both animals and humans.

As for the many negative health effects of a low potassium-to-sodium ratio in modern humans, we have reasons to be somewhat skeptical. One has to wonder if the studies that are out there do not conflate the effects of this ratio with those of other factors, such as smoking, heavy alcohol consumption, or consumption of industrialized high carb foods (e.g., cereals, pasta, refined sugars).

Another possible confounding factor is potassium deficiency itself, as opposed to the potassium-to-sodium ratio. Potassium deficiency, like deficiencies of other essential minerals (including sodium), is associated with serious health problems.

If potassium is deficient in one’s diet, it is also likely that the potassium-to-sodium ratio will be low, unless the diet is also equally deficient in sodium.

Let us take a look at a study by Ikeda et al. (1986), which included data from 49 regions in Japan, a country known for high consumption of sodium.

This study found a significant association between the potassium-to-sodium ratio and overall mortality and heart disease, but only among men, and not among women.

One wonders, based on this, whether another uncontrolled factor, or factors, might have biased the results. Examples are smoking and heavy alcohol consumption, which could have been higher among men than women. Another is chronic stress, which could also have been higher among men than women.

The researchers report that they found no association between the potassium-to-sodium ratio and mortality due to diabetes, liver disease, or tuberculosis. This ameliorates the problem somewhat, but does not rule out the biasing effect of other factors.

It would have been better if the researchers had controlled for the combined effect of covariates (such as smoking, alcohol consumption, etc.) in their analysis, which they did not.

Moreover, the study found no association between the potassium-to-sodium ratio and blood pressure. This is a red flag, because many of the diseases said to be caused by a low potassium-to-sodium ratio are assumed to be mediated by or at least associated with high blood pressure.

Regarding the possible confounding effect of industrialized high carb foods consumption, it seems that many of these foods have a low potassium-to-sodium ratio, as the example of whole-wheat bread above shows. Thus, some of the health problems assigned to the low potassium-to-sodium ratio may have actually been caused by heavy consumption of industrialized high carb foods.

It is also possible that the problem is with the combination of a low potassium-to-sodium ratio and industrialized high carb foods consumption.

At the time the study was conducted, Japan was somewhat westernized, which is why industrialized high carb foods consumption might have been a factor. The US strongly influenced the Japanese after World War II, as it helped rebuild Japan’s economy.

In conclusion, the jury is still out on whether the low modern potassium-to-sodium ratio is a big problem or much ado about nothing.

References:

Cordain, L. (2002). The Paleo Diet: Lose weight and get healthy by eating the food you were designed to eat. New York, NY: Wiley.

Ikeda, M., Kasahara, M., Koizumi, A., and Watanabe, T. (1986). Correlation of cerebrovascular disease standardized mortality ratios with dietary sodium and the sodium/potassium ratio among the Japanese population. Preventive Medicine, 15(1), 46-59.

Saturday, January 23, 2010

Eating fish whole: Sardines

Different parts of a fish contain different types of nutrients that are important for our health; this includes the bones and organs. Therefore it makes sense to consume the fish whole, not just filets made from it. This is easier to do with small fish than with big ones.

Small fish have the added advantage that they have very low concentrations of metals, compared to large fish. The reason for this is that small fish are usually low in the food chain, typically feeding mostly on plankton, especially algae. Large carnivorous fish tend to accumulate metals in their body, and their consumption over time may lead to the accumulation of toxic levels of metals in our bodies.

One of my favorite types of small fish is the sardine. The photo below is of a dish of sardines and vegetables that I prepared recently. Another small fish favorite is the smelt (see this post). I buy wild-caught sardines regularly at the supermarket.


Sardines are very affordable, and typically available throughout the year. In fact, sardines usually sell for the lowest price among all fish in my supermarket; lower even than tilapia and catfish. I generally avoid tilapia and catfish because they are often farmed (tilapia, almost always), and have a poor omega-6 to omega-3 ratio. Sardines are rich in omega-3, which they obtain from algae. They have approximately 14 times more omega-3 than omega-6 fatty acids. This is an excellent ratio, enough to make up for the poorer ratio of some other foods consumed during the day.

This link gives a nutritional breakdown of canned sardines; possibly wild, since they are listed as Pacific sardines. (Fish listed as Atlantic are often farm-raised.) The wild sardines that I buy and eat probably have a higher vitamin and mineral content than the ones the link refers to, including a higher calcium content, because they are not canned or processed in any way. Two sardines should amount to a little more than 100 g, of which about 1.6 g will be omega-3. This is a pretty good amount of omega-3, second only to a few other fish, like wild-caught salmon.
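To make the omega-3 arithmetic above concrete, here is a minimal Python sketch (my own illustration; the 1.6 g figure comes from the paragraph above, and the implied omega-6 amount is just an estimate derived from the 14-to-1 ratio mentioned earlier).

# Approximate values for a serving of about 100 g of sardines (two fish), from the text:
omega_3_g = 1.6
omega_3_to_omega_6_ratio = 14.0

# Implied omega-6 content, assuming the stated ratio holds for this serving.
omega_6_g = omega_3_g / omega_3_to_omega_6_ratio
print(round(omega_6_g, 2))  # roughly 0.11 g of omega-6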

Below is a simple recipe. I used it to prepare the sardines shown on the photo above.

- Steam cook the sardines for 1 hour.
- Spread the steam cooked sardines on a sheet pan covered with aluminum foil; use light olive oil to prevent the sardines from sticking to the foil.
- Preheat the oven to 350 degrees Fahrenheit.
- Season the steam cooked sardines to taste; I suggest using a small amount of salt, and some chili powder, garlic powder, cayenne pepper, and herbs.
- Bake the sardines for 30 minutes, turn the oven off, and leave them there for 1 hour.

The veggies on the plate are a mix of the following: sweet potato, carrot, celery, zucchini, asparagus, cabbage, and onion. I usually add spinach but I had none around today. They were cooked in a covered frying pan, with olive oil and a little bit of water, over low heat. The cabbage and onion pieces were added to the mix last, so that in the end they had the same consistency as the other veggies.

I do not clean, or gut, my sardines. Normally I just wash them in water, as they come from the supermarket, and immediately start cooking them. Also, I eat them whole, including the head and tail. Since they feed primarily on plant matter, and have a very small digestive tract, there is not much to be “cleaned” off of them anyway. In this sense, they are like smelts and other small fish.

For about a year now I have been eating them like that; and so has my family (wife and 4 kids), of their own volition. Other than some initial ew’s, nobody has ever had even a hint of a digestive problem as a result of eating the sardines like I do. Maybe the Kock family members share a common crocodile-like digestive system, but I think most people will do fine following the same approach. This is very likely the way most of our hominid ancestors ate small fish.

If you prepare the sardines as above, they will be ready to store, or eat somewhat cold. There are several variations of this recipe. For example, you can bake the sardines for 40 minutes, and then serve them hot.

You can also add the stored sardines later to a soup, lightly steam them in a frying pan (with a small amount of water), or sauté them for a meal. For the latter I would recommend using coconut oil and low heat. Butter can also be used, which will give the sardines a slightly different taste.

Friday, January 22, 2010

What's wrong with hair zinc analysis?

Hair analysis used to assess the nutritional status of a mineral can be flawed because of exogenous contamination (from water, dust, cosmetics, shampoos, etc.) and because of endogenous, nonnutritional factors such as hair growth rate, color, sex, pregnancy, and age.

However, I do find it quite interesting that hair analysis could indicate a person's nutritional history. Historical measurements would otherwise be difficult to get, but hair grows slowly, and so it can reflect levels of zinc and other elements over time. Plus, it's an easy test, since hair is easy to collect.

Better non-invasive indicators of zinc deficiency are the Bryce-Smith taste test and sweat analysis. Loss of taste is one of the first symptoms of a deficiency, because zinc is needed for gustin, an enzyme present in saliva that modulates the sense of taste. And sweat analysis may be an even more sensitive index than blood biomarkers.

NSI Determine Checklists - Grandma and me


Grandma

My grandma, 79, scored a 6 on the NSI Determine Checklist, which puts her at “high nutritional risk.” Her eating habits are affected by GERD and she tries to avoid any processed foods high in sodium because of hypertension. She also eats alone most of the time and eats fewer than two meals per day. Although she dislikes eating fruits and vegetables, she does manage to obtain some of them in her diet. She drinks plenty of milk and uses dairy products liberally. She doesn’t drink alcohol, has enough money for food she needs (although she said she could use more), and only takes one prescription medication. She has not gained or lost 10 pounds without wanting to in the last six months. She shops and cooks for herself and reports that she also picks at food throughout the day.

Me

I, 31, scored a 0 on the NSI Determine Checklist. I have no conditions that affect my diet, I eat balanced meals along with vegetables, fruits and milk products, and don’t drink more than one glass of wine daily. I have no mouth problems, have money to buy food, eat with others most of the time, don’t take any prescriptions, have maintained the same weight for years, and often shop and cook for myself.

Thoughts

Although there is a stark contrast between my nutritional risk and that of my grandmother, it doesn’t escape me that in 48 years I could be in the same situation she is in now. I realize that when I eat too much I too am susceptible to GERD symptoms such as reflux and heartburn. This may affect my nutritional risk in the future unless I am conscientious enough to make changes in my diet to reduce inflammation in my esophagus. As for my grandma, her high nutritional risk concerns me greatly, because at her age she should be more focused on nutrition than I am. We will need to change that.

Applied evolutionary thinking: Darwin meets Washington

Charles Darwin, perhaps one of the greatest scholars of all time, thought about his theory of mutation, inheritance, and selection of biological traits for more than 20 years, and finally published it as a book in 1859.  At that time, many animal breeders must have said something like this: “So what? We knew this already.”

In fact, George Washington, who died in 1799 (many years before Darwin’s famous book came out), had tried his hand at what today would be called "genetic engineering." He produced at least a few notable breeds of domestic animals through selective breeding. Those include the "Mammoth Jackstock," a breed of giant donkeys used to sire mules so big and strong that they were used to pull large boats filled with coal along artificial canals in Pennsylvania.

Washington learned the basic principles of animal breeding from others, who learned it from others, and so on. Animal breeding has a long tradition.

So, not only did animal breeders like George Washington know about the principles of mutation, inheritance, and selection of biological traits; they had also been putting that knowledge into practice for quite some time before Darwin’s famous book "The Origin of Species" was published.

Yet, Darwin’s theory has applications that extend well beyond animal breeding. There are thousands of phenomena that would look very “mysterious” today without Darwin’s theory. Many of those phenomena apply to nutrition and lifestyle, as we have been seeing lately with the paleo diet movement. Among the most amazing and counterintuitive are those in connection with the design of our brain.

Recent research, for instance, suggests that “surprise” improves cognition. Let me illustrate this with a simple example. If you were studying a subject online that required memorization of key pieces of information (say, historical facts) and a surprise stimulus was “thrown” at you (say, a video clip of an attacking rattlesnake was shown on the screen), you would remember the key pieces of information (about historical facts) much better than if the surprise stimulus was not present!

The underlying Darwinian reason for this phenomenon is that it is adaptively advantageous for our brain to enhance our memory in dangerous situations (e.g., an attack by a venomous snake), because that would help us avoid those situations in the future (Kock et al., 2008; references listed at the end of this post). Related mental mechanisms increased our ancestors’ chances of survival over many generations, and became embedded in our brain’s design.

Animal breeders knew that they could apply selection, via selective breeding, to any population of animals, and thus make certain traits evolve in a matter of a few dozen generations or less. This is known as artificial selection. Among those traits were metabolic traits. For example, a population of lambs may be bred to grow fatter on the same amount of food as leaner breeds.

Forced natural selection may have been imposed on some of our ancestors, as I argue in this post, leading metabolic traits to evolve in as little as 396 years, or even less, depending on the circumstances.

In a sense, forced selection would be a bit like artificial selection. If a group of our ancestors became geographically isolated from others, in an environment where only certain types of food were available, physiological and metabolic adaptations to those types of food might evolve. This is also true for the adoption of cultural practices; culture can also strongly influence evolution (see, e.g., McElreath & Boyd, 2007).

This is why it is arguably a good idea for people to look at their background (i.e., learn about their ancestors), because they may have inherited genes that predispose them to function better with certain types of diets and lifestyles. That can help them better tailor their diets to their genetic makeup, and also understand why certain diets work for some people but not for others. (This is essentially what medical doctors do, on a smaller time scale, when they take a patient's parents' health history into consideration when dispensing medical advice.)

By ancestors I am not talking about Homo erectus here, but ancestors that lived 3,000, 1,000, or even 500 years ago, at times when medical care and other modern amenities were not available, and thus selection pressures were stronger. For example, if your not-so-distant ancestors consumed plenty of dairy, chances are you are better adapted to consume dairy than people whose ancestors did not.

Very recent food inventions, like refined carbohydrates, refined sugars, and hydrogenated fats, are too new to have influenced the genetic makeup of anybody living today. So, chances are, they are bad for the vast majority of us. (A small percentage of the population may not develop any hint of the diseases of civilization after consuming them for years, but they are not going to be as healthy as they could be.) Other, not so recent, food inventions, such as olive oil, certain types of bread, and certain types of dairy, may be better for some people than for others.

References:

Kock, N., Chatelain-Jardón, R., & Carmona, J. (2008). An experimental study of simulated web-based threats and their impact on knowledge communication effectiveness. IEEE Transactions on Professional Communication, 51(2), 183-197.

McElreath, R., & Boyd, R. (2007). Mathematical models of social evolution: A guide for the perplexed. Chicago, IL: The University of Chicago Press.

Wednesday, January 20, 2010

Who is really behind these posts?

Acknowledgement: In addition to the references provided at the end of several posts, I would like to acknowledge that I also regularly consult with the most interesting man in the world, especially in connection with complex scientific matters. (YouTube link below, if you must know the identity of this incredibly modest and low-key person.)

http://www.youtube.com/watch?v=PVwG1t-NVAA

No need to refer to him as The Most Interesting Man in the World (i.e., capitalized), because, as he notes: "There is only one most interesting man in the world."

How long does it take for a food-related trait to evolve?

Often in discussions about Paleolithic nutrition, and in books on the subject, we see speculations about how long it would take for a population to adapt to a particular type of food. Many of these speculations are way off the mark; some assume that even 10,000 years are not enough for evolution to take place.

This post addresses the question: How long does it take for a food-related trait to evolve?

We need a bit of Genetics 101 first, discussed below. For more details see, e.g., Hartl & Clark (2007) and, one of my favorites, Maynard Smith (1998). Full references are provided at the end of this post.

New gene-induced traits, including traits that affect nutrition, appear in populations through a deceptively simple process. A new genetic mutation appears in the population, usually in one single individual, and one of two things happens: (a) the genetic mutation disappears from the population; or (b) the genetic mutation spreads in the population. Evolution is a term that is generally used to refer to a gene-induced trait spreading in a population.

Traits can evolve via two main processes. One is genetic drift, where neutral traits evolve by chance. This process dominates in very small populations (e.g., 50 individuals). The other is selection, where fitness-enhancing traits evolve by increasing the reproductive success of the individuals that possess them. Fitness, in this context, is measured as the number of surviving offspring (or grand-offspring) of an individual.

Yes, traits can evolve by chance, and often do so in small populations.

Say a group of 20 human ancestors became isolated for some reason; e.g., they traveled to an island and got stranded there. Let us assume that the group had the common sense to include at least a few women; ideally more women than men, because women are really the reproductive bottleneck of any population.

In a new generation one individual develops a sweet tooth, which is a neutral mutation because the island has no supermarket. Or, what would be more likely, one of the 20 individuals already had that mutation prior to reaching the island. (Genetic variability is usually high among any group of unrelated individuals, so divergent neutral mutations are usually present.)

By chance alone, that new trait may spread to the whole (now larger) population in 80 generations, or around 1,600 years, assuming a new generation emerging every 20 years. That whole population then grows even further, and gets somewhat mixed up with other groups in a larger population (they find a way out of the island). The descendants of the original island population all have a sweet tooth. That leads to increased diabetes among them, compared with other groups. They find out that the problem is genetic, and wonder how evolution could have made them like that.

The panel below shows the formulas for the calculation of the amount of time it takes for a trait to evolve to fixation in a population. It is taken from a set of slides I used in a presentation (PowerPoint file here). To evolve to fixation means to spread to all individuals in the population. The results of some simulations are also shown. For example, a trait that provides a minute selective advantage of 1% in a population of 10,000 individuals will possibly evolve to fixation in 1,981 generations, or 39,614 years. Not the millions of years often mentioned in discussions about evolution.
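The numbers in this post are consistent with two standard population genetics approximations: roughly 4N generations for a new neutral mutation to drift to fixation (when it does fix), and roughly (2/s)·ln(2N) generations for a beneficial mutation with selective advantage s in a population of N individuals. Below is a minimal Python sketch of those approximations; it is my own rendering, so the exact formulas in the slides may differ in their details.

import math

def drift_fixation_generations(population_size):
    # Average time for a new neutral mutation to reach fixation,
    # given that it does fix: about 4N generations.
    return 4 * population_size

def selection_fixation_generations(selective_advantage, population_size):
    # Approximate time for a beneficial mutation that does fix:
    # roughly (2/s) * ln(2N) generations.
    return (2.0 / selective_advantage) * math.log(2 * population_size)

YEARS_PER_GENERATION = 20  # the assumption used throughout this post

print(drift_fixation_generations(20))  # 80 generations, as in the island example above

gens = selection_fixation_generations(0.01, 10000)
print(round(gens), round(gens * YEARS_PER_GENERATION))  # about 1,981 generations, or 39,614 years

gens = selection_fixation_generations(1.0, 10000)
print(round(gens * YEARS_PER_GENERATION))  # about 396 years, as in the example further below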


I say “possibly” above because traits can also disappear from a population by chance, and often do so at the early stages of evolution, even if they increase the reproductive success of the individuals that possess them. For example, a new beneficial metabolic mutation appears, but its host fatally falls off a cliff, or contracts an unrelated disease and dies, before leaving any descendants.

How come the fossil record suggests that evolution usually takes millions of years? The reason is that it usually takes a long time for new fitness-enhancing traits to appear in a population. Most genetic mutations are either neutral or detrimental, in terms of reproductive success. It also takes time for the right circumstances to come into place for genetic drift to happen – e.g., massive extinctions, leaving a few surviving members. Once the right elements are in place, evolution can happen fast.

So, what is the implication for traits that affect nutrition? Or, more specifically, can a population that starts consuming a particular type of food evolve to become adapted to it in a short period of time?

The answer is yes. And that adaptation can take a very short amount of time to happen, relatively speaking.

Let us assume that all members of an isolated population start on a particular diet, which is not the optimal diet for them. The exception is one single lucky individual that has a special genetic mutation, and for whom the diet is either optimal or quasi-optimal. Let us also assume that the mutation leads the individual and his or her descendants to have, on average, twice as many surviving children as other unrelated individuals. That translates into a selective advantage (s) of 100%. Finally, let us conservatively assume that the population is relatively large, with 10,000 individuals.

In this case, the mutation will spread to the entire population in approximately 396 years.

Descendants of individuals in that population (e.g., descendants of the Yanomamö) may possess the trait, even after some fair mixing with descendants of other populations, because a trait that goes into fixation has a good chance of being associated with dominant alleles. (Alleles are the different variants of the same gene.)

This Excel spreadsheet (link to a .xls file) is for those who want to play a bit with numbers, using the formulas above, and perhaps speculate about what they could have inherited from their not so distant ancestors. Download the file, and open it with Excel or a compatible spreadsheet system. The formulas are already there; change only the cells highlighted in yellow.

References:

Hartl, D.L., & Clark, A.G. (2007). Principles of population genetics. Sunderland, MA: Sinauer Associates.

Maynard Smith, J. (1998). Evolutionary genetics. New York, NY: Oxford University Press.

Go see your doctor, often

As I blog about health issues, and talk with people about them, I often notice that there is a growing contempt for the medical profession.

This comes in part from the fact that many MDs are still providing advice based on the mainstream assumption that saturated fat is the enemy. Much recent (and even some old) research suggests that among the main real enemies of good health are: chronic stress, refined carbs, refined sugars, industrial trans-fats, and an omega-6/omega-3 imbalance caused by consumption of industrial vegetable oils rich in omega-6 fats.

Because of this disconnect, some people stop seeing their doctors regularly; others avoid doctors completely. Many rely exclusively on Internet advice, from health-related blogs (like this) and other sources. In my opinion, this is a BIG mistake.

A good MD has something that no blogger who is not an MD (like me) can have. He or she has direct access to a much larger group of people, and to confidential information that can clarify things that would look mysterious to non-MDs. They cannot share that information with others, but they know.

For example, often I hear from people that they did this and that, in terms of diet and lifestyle, and that their lab tests were such and such. Later I find out that what they told me was partially, or completely, wrong. That is, they distorted the truth, maybe subconsciously.

I have never met an MD who completely ignored hard facts, such as results of lab tests and common health-related measurements. I have never met an MD who tried to force me to do anything either; although I have to admit that some tend to be a bit pushy.

I see a doctor who does not agree with me; e.g., he wanted me to take statins. No problem; that is the way I like it. If my doctor agreed 100% with all I say, would I need to see that doctor at all?

My doctor does not question lab results though, and maybe I am changing a bit the way he thinks. He wanted me to take statins, but once I told him that I wanted to try a few other things first, he said: no problem. When the results came, he had that look on his face - maybe u wuz royt eh!?

Many, many patients are under the mistaken assumption that they need to please their doctors. A subconscious assumption for most, no doubt. I guess this is part of human nature, but I don’t think it is helpful to doctors or patients.

Patients actually need to work together with their doctors, see them often, do their own research, ask questions, and do those things that lead to health improvements – ideally measurable ones.

Monday, January 18, 2010

The evolution of costly traits: A challenge to a strict paleo diet orientation

The fundamental principle of the paleo diet movement is that we should model our diet on the diet of our ancestors. In other words, for optimal health, our diet should be as close to the diet of our ancestors as possible. Following this principle generally makes sense, but there are a number of problems with trying to follow it too strictly.

Some of those problems will have to wait for other posts. Examples are: our limited knowledge about what our ancestors really ate (some say: lean meat; others say: fatty meat); the fact that evolution can happen fast under certain circumstances (a few thousand years, not millions of years, thus recent and divergent adaptations are a possibility); the fact that among our ancestors some, like Homo erectus, were big meat eaters, but others, like Australopithecus afarensis, were vegetarians … Just to name a few problems.

The focus of this post is on traits that evolved in spite of being survival handicaps. These counterintuitive traits are often called costly traits, or Zahavian traits (in animal signalling contexts), in honor of the evolutionary biologist Amotz Zahavi (Zahavi & Zahavi, 1997). The implication for dieting is that our ancestors might have evolved some eating habits that are bad for human survival, and moved away from others that are good for survival. And I am not only talking about survival among modern humans; I am talking about survival among our human ancestors too.

Here is the most interesting aspect of these types of traits. Our ancestors may have acquired them through genetic mutation and selection (as opposed to genetic drift, which may lead some traits to evolve by chance). That is, they emerged not in spite, but because of evolutionary pressures.

The simple reason is that evolution maximizes reproductive success, not survival. If that were not the case, mice species, as well as other species that specialize in fast reproduction within relatively short lifespans, would never have evolved.

In fact, excessive longevity is akin to quasi-cloning through asexual reproduction, from an evolutionary perspective. It is bad because species need genetic diversity to exist in a constantly changing environment, and genetic diversity is significantly increased by sexual reproduction; the more, the better. Without plenty of death to match that, overpopulation would ensue.

Death is one of evolution’s main allies.

Genes code for the expression of phenotypic traits, such as behavioral (e.g., aggressiveness) and morphological (e.g., opposing thumbs) traits. Costly traits are phenotypic traits that evolved in spite of imposing a fitness cost, often in the form of a survival handicap.

In non-human animals, the classic example of a costly trait is the peacock’s train, used by males to signal good health to females. This trait is usually referred to, wrongly, as the male peacock’s tail. Both males and females have tails, but only the males have the large trains, which are actually tail appendages.

What about humans?

One example is the evolution of testosterone markers in human males. Testosterone markers (facial masculinity) have been hypothesized to be handicaps evolved in part by human males to signal to females that they are healthy, essentially because testosterone suppresses the immune system. This apparently bizarre idea is known as the immunocompetence-handicap hypothesis (Rhodes et al., 2003).

This idea will sound bizarre to some, because of the notion that testosterone helps build muscle mass (which it does, together with other hormones, such as insulin), and arguably muscle mass helped our ancestors hunt and fight off predators. Yet, consider the following questions: If muscularity was so useful for hunting and fighting, why are humans so weak compared with other animals of similar size? Why are not females as muscular as males? Why is it so hard to gain muscle mass, compared to fat mass?

Another example is the evolution of oral speech in humans. The evolution of oral speech is one of the most important landmarks in the evolution of the human species, having happened relatively recently in our evolutionary history. However, the new larynx design required for oral speech also significantly increased our ancestors’ chances of death by choking during ingestion of food and liquids, and of suffering from various aerodigestive tract diseases such as gastroesophageal reflux, among other survival-related problems.

Yet, oral speech evolved because it enhanced overall reproductive success, in part by enabling knowledge communication (Kock, 2009), and also due to sexual selection (Miller, 2000). As Miller put it in his book The Mating Mind, ancestral women could gauge a man’s overall health by his ability to speak intelligently, in addition to other traits, such as testosterone markers.

Most of the sexual selection pressure during human evolution was placed by females on males, not the other way around. Ancestral women were more selective than men about who they had sex with; so are modern women, Sex and the City notwithstanding.

Now let us look at the connection with strict paleo dieting.

Paleo man may have consumed certain types of food to help with his testosterone handicap, increasing his reproductive success. As far as evolution is concerned, this is fine – the genes are selfish, and could not care less about the host (Burt & Trivers, 2006; Dawkins, 1990). The guy will mate, but will not live as long as he would like, past reproductive age. Given this possibility, does eating exactly like paleo man make sense for a 50 year old married male today? That is where too much of a focus on a paleo diet may be a problem.

Of course "paleo man" is really a metaphor. There was no single "paleo man." There were at least three hominid species in the Paleolithic period that differed significantly from each other: Homo sapiens, Homo erectus, and Homo habilis. If we go back in time a little further, we encounter other hominid species, such as Australopithecus afarensis and Australopithecus africanus, who were mostly, if not strictly, vegetarians.

Evolution is very useful as a unifying principle to help us understand what is healthy today and what is not. But it cannot completely replace empirical research on nutrition. Some of that research will undoubtedly uncover nutrition habits that increase longevity and improve health today, even though they were not practiced by our paleo ancestors.

We know that highly refined carbs (e.g., white bread with no fiber) and sugars (e.g., table sugar) are too recent an addition to the human diet for us to have evolved to use them optimally for nutrition. So their association with the metabolic syndrome makes sense, from an evolutionary perspective. But there are very gray areas where paleo nutrition speculations cannot tell us much, and what they tell us may be misleading.

References:

Burt, A. & Trivers, R. (2006). Genes in conflict: The biology of selfish genetic elements. Cambridge, MA: Harvard University Press.

Dawkins, R. (1990). The selfish gene. Oxford, UK: Oxford University Press.

Kock, N. (2009). The evolution of costly traits through selection and the importance of oral speech in e-collaboration. Electronic Markets, 19(4), 221-232.

Miller, G.F. (2000). The mating mind: How sexual choice shaped the evolution of human nature. New York, NY: Doubleday.

Rhodes, G., Chan, J., Zebrowitz, L.A., & Simmons, L.W. (2003). Does sexual dimorphism in human faces signal health? Proceedings of the Royal Society of London: Biology Letters, 270(S1), S93-S95.

Zahavi, A. & Zahavi, A. (1997). The Handicap Principle: A missing piece of Darwin’s puzzle. Oxford, England: Oxford University Press.

Sunday, January 17, 2010

Ischemic heart disease among Greenland Inuit: Data from 1962 to 1964

The traditional Inuit diet is very high in animal protein and fat. It also includes plant matter. Typically it is made up primarily of the following: fish, walrus, seal, whale, berries, and fireweed (of which syrups and jellies can be made).

Kjærgaard and colleagues (see under References, at the end of this post) examined data from an Inuit population in Greenland from 1962 to 1964, prior to the heavy westernization of their diet that is seen today. They investigated 96.9% of the whole population in three areas, including Ammassalik in East Greenland (n = 1,851).

Of those, only 181 adults, or 9.7 percent, had anything that looked like an abnormality that could suggest ischemia. This included ventricular hypertrophy (an enlargement of the heart chambers), which leads to an overestimation of possible ischemia, because benign ventricular hypertrophy is induced by continuous physical exertion. These 181 adults were then selected for further screening.

Benign ventricular hypertrophy is also known as athlete's heart, because it is common among athletes. A prevalence of ventricular hypertrophy at a relatively young age, and declining with age, would suggest benign hypertrophy. The opposite would suggest pathological hypertrophy, which is normally induced by chronic hypertension.

As you can see from the figure below, from Kjærgaard et al. (2009), the pattern observed among the Inuit was of benign hypertrophy, suggestive of strong physical exertion at a young age.


A pattern of benign hypertrophy induced by robust physical activity is also consistent with reports by Stefansson (1958) about the life of the Eskimos in Northern Alaska. It is reasonable to assume that these Eskimos had a diet and lifestyle similar to the Greenland Inuit.

Back to Kjærgaard et al.’s (2009) study. The 181 adults selected for further screening then had a 12-lead ECG performed (this is a widely used test to check for heart abnormalities). The results suggested that only two men, aged 62 and 63 years, had ischemic heart disease. All in all, this suggests a prevalence of ischemic heart disease of 0.11 percent, which is very low.

(The authors of the article estimated the prevalence of ischemic heart disease at 1.1 percent, because they used the n = 181, as opposed to the original n = 1,851, in their calculation. The latter is the correct baseline sample size, in my opinion. Still, the authors present the 1.1 percent number as quite low as well, which it is.)

Recent statistics (at the time of this post's writing) suggest a prevalence of ischemic heart disease in the US of 6.8 percent. That is, the prevalence in the US is 63 times higher than among the Inuit studied (using the 0.11 percent as the basis for comparison). And, it should be noted that there are many countries with a higher prevalence of ischemic heart disease than modern US.
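For clarity, here is the prevalence arithmetic as a minimal Python sketch (my own illustration, using only the figures quoted above).

def prevalence_percent(cases, sample_size):
    # Prevalence expressed as a percentage of the sample.
    return 100.0 * cases / sample_size

inuit = prevalence_percent(2, 1851)    # two ischemic cases over the full sample
authors = prevalence_percent(2, 181)   # the authors' figure, over the screened subgroup only
print(round(inuit, 2), round(authors, 1))  # about 0.11 and 1.1 percent

us = 6.8  # recent US prevalence, in percent, as quoted above
print(round(us / inuit))  # roughly 63 times higher in the US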

It is possible that the low prevalence of ischemic heart disease among the Inuit was partly due to a higher mortality of those with the disease than in modern US, where medical intervention can prolong one's life in the presence of almost any disease. That is, perhaps many of those Inuit with ischemia would die quickly, and thus would not be captured by a study like this.

It is doubtful, however, that this would explain a difference as large as the one observed. Moreover, if many Inuit were dying due to ischemia, there would probably be plenty of evidence suggesting that. (I would imagine that the mysterious deaths associated with chest pain, and other related symptoms, would be a constant topic of conversation.) Reports from early explorers, however, suggest the opposite (e.g., Stefansson, 1958), and are consistent with the study described here.

In conclusion, this study suggests that the diet and lifestyle of the Greenland Inuit prior to the 1960’s (i.e., not their traditional diet and lifestyle, but approaching it) could be seen today as heart-healthy (at least for them), even though the Greenland Inuit ate a lot of animal protein and fat.

References:

Kjærgaard, M., Andersen, S., Holten, M., Mulvad, G., Kjærgaard, J.J. (2009). Low occurrence of ischemic heart disease among Inuit around 1963 suggested from ECG among 1851 East Greenland Inuit. Atherosclerosis, 203(2), 599-603.

Stefansson, V. (1958). Eskimo longevity in Northern Alaska. Science, 127(3288), 16-19.

Saturday, January 9, 2010

Use of Organic Acids as Detoxification Markers

Environmental toxins, or xenobiotics, are foreign chemicals that enter our bodies and can potentially harm our organs, tissues, and cells. There are more than 60,000 known everyday chemicals to which we are exposed, of which at least 200 are found in newborns at the moment of birth. The most prevalent pollutants nowadays are phthalates and plasticizers, which have been determined to be endocrine disruptors and have been linked to thyroid diseases and various health conditions such as insulin resistance, metabolic syndrome, obesity, osteoporosis, and arteriosclerosis. Other toxins are implicated in depleting folic acid, leading to digestive disorders such as colitis, or are known carcinogens.

Organic acids, which are compounds used in metabolism, can be measured to assess how the body responds to toxins or to evaluate nutrients related to detoxification processes. For example, methylation is a vital step in converting homocysteine to methionine and in detoxifying chemicals. Without B12, methylation is suppressed, and the resulting methylmalonic acid can be measured in urine. If a folic acid deficiency results, then the organic acid formiminoglutamate will accumulate and can be measured. A second example of an organic acid that can be used to evaluate nutrient deficiency resulting from toxins is xanthurenic acid. This acid appears in urine when chemicals deplete B6 (pyridoxine). A third example is the measurement of fatty acids. When phthalates interfere with carnitine synthesis, beta-oxidation in the mitochondria is impaired. This, in turn, can result in elevated adipate, suberate, and ethylmalonate.

As markers of impaired detoxification or of nutrient deficiency resulting from toxins, organic acids can help the clinical practitioner determine nutritional needs as well as possible nutrient or bioactive therapies. These therapies may include supplementation with B12, folic acid, N-acetyl cysteine, glutathione, CoQ10, and glycine. By correcting deficiencies, these nutrients can potentially restore or boost detoxification, in an effort to improve patients' health.

Summarized from

Rogers, S.A. (2006). Using organic acids to diagnose and manage recalcitrant patients. Alternative Therapies, 12(4), July/August. Available at: http://blackboard.bridgeport.edu/@@437EB59FF6DF953742043192DBAC3894/courses/1/NUTR-560E-DLB-2009NF/content/_22128_1/OrganicsAcidCME.pdf

Okinawa: The island of pork

The original inhabitants of the Ryūkyū Islands, of which the island of Okinawa is the largest, are believed to have the highest life expectancy in the world.

One of the staples of their diet is sweet potatoes. The carbohydrate percentage of a sweet potato is about 20; that is, each 100 g of sweet potato mass has about 20 g of carbohydrates. Sweet potatoes have a medium-high glycemic index, and are often avoided by those with impaired insulin sensitivity, and certainly by diabetics.

The other main staple of their diet is pork, as you may have inferred from the title of this post. The quote below is from the first of the three links provided below the quote.
Pork appears so frequently in the Okinawan diet that to say "meat" is really to say "pork." [...] It is no exaggeration to say that the present-day Okinawan diet begins and ends with pork.

So, what is the secret of the Okinawans’ longevity? Maybe it is the diet. Maybe it is the lifestyle. Maybe it is the fact that their mothers and fathers are Okinawans (the heritability of longevity has been estimated to be about 33%, and to be higher among females than males). Here are some interesting points that are worth noting:

- Their diet is not made up only of meat, but it includes plenty of it.

- Their diet is not particularly low in saturated fat, and maybe it is high in it.

- Their diet is not particularly low in dietary cholesterol, and maybe high in it, since they eat the pig whole, including the parts (e.g., organs) rich in dietary cholesterol.

- Their diet is not a no carb diet, not even a typical low carb diet, but it seems to be very low in refined carbs and sugars.

Friday, January 8, 2010

Muscle loss during short-term fasting

This is an issue that often comes up in online health discussions, and was the topic of a conversation I had the other day with a friend about some of the benefits of intermittent fasting.

Can the benefits of intermittent fasting be achieved without muscle loss? The answer is “yes”, to the best of my knowledge.

Even if you are not interested in bulking up or becoming a bodybuilder, you probably want to keep the muscle tissue you have. As a rule, muscle takes a long time, and effort, to build. It is generally easier to lose muscle than it is to gain it. Fat, on the other hand, can be gained very easily.

Body fat percentage is positively correlated with inflammation markers and the occurrence of various health problems. Since muscle tissue is a major component of lean body mass, which by definition excludes fat, a higher proportion of muscle implies a lower body fat percentage, and thus tends to be negatively correlated with inflammation markers and health problems.

As muscle mass increases, so does health; as long as the increase in muscle mass is “natural” – i.e., not induced by steroids, for instance.

In short-term fasts (e.g., up to 24 h) one can indeed lose some muscle, as the body produces glucose from muscle tissue through a process known as gluconeogenesis. In this sense, muscle is the body’s main reserve of glucose, while adipocytes are the body’s main reserve of fat.

Muscle loss is not pronounced in short-term fasts, though. It occurs mostly after the body’s glycogen reserves, particularly those in the liver, are significantly depleted, which often happens 8 to 12 hours into the fast, depending on how depleted the glycogen reserves are when one starts fasting.

When the body is running short on glycogen, it becomes increasingly reliant on fat as a source of energy, sparing muscle tissue. That is, it burns fat, often in the form of ketone bodies, which are byproducts of fat metabolism. This state is known as ketosis. There is evidence that ketosis is a more efficient state from a metabolic perspective (Taubes, 2007, provides a good summary), which may be why many people feel an increase in energy when they fast.

The brain also runs on fat (through ketone byproducts) while in ketosis, although it still needs some glucose to function properly. That is primarily where muscle tissue comes into the picture, to provide the glucose that the brain needs to function. While glucose can also be made from fat, more specifically a lipid component called glycerol, this usually happens only during very prolonged fasting and starvation.

You do not have to consume carbohydrates to replenish depleted glycogen after you break the fast. Dietary protein will do the job, as it is also used in gluconeogenesis.

Dietary protein also elicits an insulin response, which is comparable to that elicited by glucose. The difference is that protein also triggers other hormonal responses that counterbalance insulin, allowing the body to use fat as a source of energy. Insulin, by itself, promotes fat deposition and, at the same time, inhibits fat release.

When practicing intermittent fasting, one can increase protein synthesis by doing resistance exercise (weight training, HIT), which tips the scale toward muscle growth, and away from muscle catabolism.

This may actually lead to significant muscle gain in the long term. Fasting itself promotes the secretion of hormones (e.g., growth hormone) that have anabolic effects.

The following sites focus on muscle gain through intermittent fasting; the bloggers are living proof that it works.


  http://leangains.com/

Muscle catabolism happens all the time, even in the absence of fasting. As with many tissues in the body (e.g., bones), muscle is continuously synthesized and degraded. Muscle tissue grows when that balance is tipped toward synthesis, and is lost otherwise.

Muscle will atrophy (i.e., be degraded) if not used, even if you are not fasting. In fact, you can eat a lot of protein and carbohydrates and still lose muscle. Just note what happens when an arm or a leg is immobilized in a cast for a long period of time.

Short-term fasting is healthy, probably because it happened frequently enough among our hominid ancestors to lead to selective pressures for metabolic and physiological solutions. Consequently, our body is designed to function well while fasting, and triggering those mechanisms correctly may promote overall health.

The relationship between fasting and health likely follows a nonlinear pattern, possibly an inverted U-curve pattern. It brings about benefits up until a point, after which some negative effects ensue.

Long-term fasting may cause severe heart problems, and eventually death, as the heart muscle is used by the body to produce glucose. Here the brain has precedence over the heart, so to speak.

Voluntary, and in some cases forced, short-term fasting was likely very common among our Stone Age ancestors; and consumption of large amounts of high glycemic index carbohydrates very uncommon (Boaz & Almquist, 2001).

References:

Boaz, N.T., & Almquist, A.J. (2001). Biological anthropology: A synthetic approach to human evolution. Upper Saddle River, NJ: Prentice Hall.

Taubes, G. (2007). Good calories, bad calories: Challenging the conventional wisdom on diet, weight control, and disease. New York, NY: Alfred A. Knopf.

Wednesday, January 6, 2010

Stop Measuring and Start Thinking

Sam Taylor-Wood, Still Life, 2001, Edition of 6, 35 mm Film/DVD
(Thanks to Sam Taylor-Wood for permission to use this image)

Saturday, January 2, 2010

Eating fish whole: Smelts

Since different parts of a fish contain different types of nutrients that are important for our health, it makes sense to consume the fish whole. This is easier to do with small fish than with big ones.

One of my favorite types of small fish is the smelt; the photo below shows a batch of smelts that I prepared using the recipe below. Another small fish favorite is the sardine. Small fish are usually low in the food chain, and thus have very low concentrations of metals that can be toxic to humans.


Many people dislike the taste of smelts, but will eat them if they are well seasoned and their texture is somewhat firm. Here is a recipe that achieves both.

- Steam cook the smelts for 30 minutes to 1 hour (less time yields a firmer texture).
- Spread the steam cooked smelts on a sheet pan covered with aluminum foil; use light olive oil to prevent the fish from sticking to the foil.
- Preheat oven to 350 degrees Fahrenheit.
- Season the steam cooked smelts to taste; I suggest using salt, chili powder, garlic powder, and herbs.
- Bake the smelts for 30 minutes, turn the oven off, and leave them there for 1 hour.

There is no need to clean, or gut, the smelts for the recipe above. Since they feed primarily on plant matter, and have a very small digestive tract, there is not much to be “cleaned” off of them anyway.

They will be ready to store or eat cold. There are several variations of this recipe. For example, you can bake them for 40 minutes, and then serve them hot.

Friday, January 1, 2010

How to differentiate between a B12 and a folate deficiency

Regardless of whether megaloblastic anemia is caused by a deficiency of folate or of vitamin B12 (cobalamin), large doses of folate will correct the anemia (1). Because of this, the extra folate can potentially "mask" the symptoms of a vitamin B12 deficiency, such as those of pernicious anemia.

Unfortunately, an undiagnosed chronic vitamin B12 deficiency can lead to irreversible neuropathy. Cobalamin in its methyl derivative form is necessary to methylate homocysteine to methionine (2). Cobalamin (in its adenosyl form) is also necessary to convert methylmalonyl CoA to succinyl CoA. The absence of B12 therefore leads to the accumulation of both methylmalonic acid and homocysteine (2). As they accumulate, they can lead to neuropathy via irreversible demyelination of nerves (3).

The mechanism by which this occurs is thought to involve methylmalonyl CoA acting as an inhibitor of malonyl CoA's role in fatty acid biosynthesis, which leads to myelin sheath degeneration (3). However, this does not explain why both homocysteine and methylmalonic acid must be elevated for demyelination to occur, so more research is needed.

Correct treatment depends on distinguishing a B12 deficiency from a folate deficiency. This can be achieved by assessing both methylmalonic acid and homocysteine blood levels (2, 4). If both are elevated, a B12 deficiency in the tissues is indicated (4). If both are normal, no B12 deficiency exists; and if only homocysteine is elevated, a folate deficiency may exist (4).
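
The decision rule described in the paragraph above can be summarized in a short Python sketch (this is only a restatement of what references 2 and 4 say as cited here, not clinical advice; what counts as "elevated" depends on each laboratory's reference ranges, and the case of an isolated methylmalonic acid elevation is not addressed in these sources, so it is left open):

    def interpret_b12_folate(mma_elevated, homocysteine_elevated):
        """Decision rule from (2) and (4): both elevated -> tissue B12 deficiency;
        both normal -> no B12 deficiency; only homocysteine elevated -> possible
        folate deficiency."""
        if mma_elevated and homocysteine_elevated:
            return "B12 deficiency in tissues"
        if not mma_elevated and not homocysteine_elevated:
            return "no B12 deficiency"
        if homocysteine_elevated:
            return "possible folate deficiency"
        return "isolated MMA elevation; not addressed in the sources, interpret with clinical context"

    print(interpret_b12_folate(mma_elevated=True, homocysteine_elevated=True))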

References

1. Gropper SS, Smith JL, Groff JL. Advanced Nutrition and Human Metabolism. Belmont, CA: Thomson Wadsworth, 2009.
2. Devlin TM. Textbook of Biochemistry with Clinical Correlations. Philadelphia: Wiley-Liss, 2002.
3. Pagana KD, Pagana TJ. Mosby's Manual of Diagnostic and Laboratory Tests, 3rd ed. Mosby Elsevier, 2006.
4. Lab Tests Online. Methylmalonic acid. Available at: http://www.labtestsonline.org/understanding/analytes/mma/test.html

Saturated fat intake not associated with heart disease – Dr. Cordain’s article

I would like to comment on a recent article co-authored by Dr. Loren Cordain, and published in the journal Current Treatment Options in Cardiovascular Medicine, in 2009. Dr. Cordain is probably the leading expert today on the diet of our Stone Age ancestors.

The importance of this article comes from the fact that, in the past, Dr. Cordain argued that our Stone Age ancestors did not consume large amounts of saturated fat, because of the relatively low percentage of fat in the flesh of wild animals. This led, according to Dr. Cordain, to an evolved body design that is not well adapted to the consumption of significant amounts of saturated fat.

Yet, many other researchers have argued that saturated fats are beneficial to our health, with ample empirical evidence to back up their statements. The researchers at the Weston A. Price Foundation have been particularly prominent voices in favor of saturated fats.

Now, this acknowledgement that saturated fats (or saturated fatty acids) are not detrimental to health, particularly heart health, was made with qualifications. And Dr. Cordain is not the first author of the article. Page 293 of the article states that:
Replacement of SFAs, especially palmitate, with MUFAs may provide moderate cardiometabolic benefits, and is unlikely to do harm. However, SFA reduction does not appear to be the most important dietary modification for CHD risk reduction.
(Notes: SFA=saturated fatty acids=saturated fat, think greasy steak and egg yolk; MUFAs=monounsaturated fatty acids, think olive oil and lard; CHD=coronary heart disease.)

Palmitate refers to palmitic acid, of which meat, butter, eggs, and dark chocolate are all good sources. Even salmon is a good source of palmitic acid, although it is also an excellent source of the omega-3 fatty acids DHA and EPA. EPA is eicosapentaenoic acid and DHA is docosahexaenoic acid; both are found in fish.

So, the caution in the statement above does not make much sense given the mounting evidence that palmitic acid (especially when consumed as part of a low carb diet, in my view) may have cardio-protective effects.

Nevertheless, this is a major shift from Dr. Cordain’s previous position that saturated fats cannot be part of a healthy diet because they do not fit well with what we currently know about the diet of our Stone Age ancestors.

Maybe those ancestors ate a lot of saturated fat after all, and that consumption led to adaptations that make saturated fat consumption healthy; again, in my view, as long as it is not accompanied by a high consumption of refined carbs and sugars.

Saturated fat was probably the most readily available type of fat to those ancestors, a rich source of calories, and virtually impossible to avoid given the main component of those ancestors’ diet – meat.

Intermittent fasting and reduced inflammation

A recent post on the Primal Wisdom blog led me to go back to some of the research on an approach to dieting that I have tried myself, with some positive results. The approach is known as intermittent fasting (IF). I also found an excellent blog post by Dr. Michael Eades on IF (see here).

Typically, IF involves fasting every other day. On non-fasting days, food and water consumption is not restricted in any way; on fasting days, only water is consumed. Variations of this approach include replacing water with juice, or restricting eating to a window of only a few hours within each 24-hour period – e.g., fasting for 19 hours and then eating during a 5-hour window.

IF is different from calorie restriction (CR), in that in the latter total daily calorie intake is restricted to a somewhat fixed amount below one’s daily maintenance level (the number of calories needed to maintain one’s current weight). In CR the calorie restriction is not normally achieved through fasting, but through careful portion size control and selection of foods based on calorie content. Having said that, some prominent CR practitioners also practice IF.

One interesting aspect of IF studies is that they often do not involve any calorie reduction in the participants' diets; that is, individuals consume the same amount of calories that they would if they were not fasting at all. In other words, they consume 2X outside their fasting window, where X is their normal caloric consumption without fasting.
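
As a simple illustration of that "2X" point, here is a back-of-the-envelope Python sketch comparing weekly calorie totals under alternate-day fasting and under no fasting at all (the 2,500 kcal/day figure is just an assumed example of someone's normal intake):

    # Weekly calories with and without alternate-day fasting, assuming 2X is
    # eaten on eating days (X = normal daily intake; the value is hypothetical).
    X = 2500  # kcal/day

    no_fasting_week = 7 * X                # eat X every day
    if_week = 3.5 * 0 + 3.5 * (2 * X)      # ~3.5 fasting days, ~3.5 eating days at 2X

    print(no_fasting_week, if_week)        # both 17500 kcal -> same weekly total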

Yet, the benefits of IF are still achieved. For example, during Ramadan, the levels of inflammation markers and factors, such as C-reactive protein (CRP) and homocysteine, go down, and remain low for several weeks after IF is interrupted. These inflammation markers and factors are known to be strongly associated with heart disease.

In fact, animal studies suggest that IF yields virtually the same benefits as CR in terms of increased lifespan and disease resistance. Again, this is somewhat surprising, because IF often does not involve any reduction in calories consumed.

Fasting promotes increased levels of growth hormone in humans. A decline in growth hormone levels is associated with aging. Thus, increased circulating growth hormone may be one of the mechanisms by which IF affects lifespan.

There have been some reports of IF being associated with negative effects on health, but I suspect that they are associated with gorging on refined carbohydrates and sugars during the eating window. Refined carbohydrates and sugars promote inflammation, and IF reduces inflammation. It is conceivable that a very high consumption of refined carbohydrates and sugars during the eating window may completely negate the benefits of IF, particularly if one is doing a half-hearted version of IF to start with.

A combination of IF and a diet low in refined carbohydrates and sugars probably makes sense in terms of our evolved physiology. Our Stone Age ancestors had to fast on a regular basis, based on the availability of food – there were no refrigerators or grocery stores during the vast majority of our evolutionary history as a species. When food was available, it was consumed to satiety. In other words, our Stone Age ancestors practiced IF, against their will. Because of that, this is the state in which our body evolved to operate optimally.

If you watch enough episodes of the TV show Survivorman, you will probably notice that it is very unlikely that our Stone Age ancestors had access to enough calories to survive on plant foods only, assuming that they faced problems similar to those in the show.

Our digestive tract has evolved over millions of years, from the mostly vegetarian diet practiced by our Australopithecine ancestors to a primarily carnivorous diet adopted by human ancestors as far back as Homo erectus, and probably Homo habilis. Given that, it is only the recent invention of refined carbohydrates and sugars that has given us access to dense carbohydrate sources of calories in large quantities.

So, a combination of IF and a diet low in (or devoid of) refined carbohydrates and sugars makes evolutionary sense, and is probably why so many people who adopt Paleolithic diets see so many improvements in health markers such as inflammation markers, blood pressure, and HDL cholesterol.