Monthly Archives: September 2012

The Tallest Women In History And Today, Zhen Jinlian And Yao Defen

I used to write up quite a few of these interesting articles about the giants and extreme bodies that have existed throughout human history. This post is a continuation of that series. So who is the tallest woman in history?

Barring any mythical legends of lost tribes and civilizations, the tallest woman in recorded history is Zeng Jinlian, whom doctors calculated would have been 8 feet 1.75 inches (248.3 cm) tall had her severe spinal curvature been fully straightened. That would make her the only woman ever to have passed the 8-foot mark.

From the Tallest Man website HERE, the story of her life.

Zeng Jinlian (June 26, 1964 – February 13, 1982) was the tallest female ever recorded in medical history, taking Jane Bunford’s record. She is also the only female counted among the 16 individuals in medical history who reached a verified eight feet or more. At the time of her death at the age of 17, in Hunan, China, she was 8 ft 1.75 in (248.3 cm) tall. However, she could not stand up straight due to a severely deformed spine.

Nevertheless, she was the tallest person in the world at the time. In the year between the death of 8 feet 2 inch Don Koehler and her own, she surpassed fellow ‘eight-footers’ Gabriel Estavao Monjane and Suleiman Ali Nashnush. She was the second woman in recorded history to become the tallest person in the world. Jane Bunford was the first.

From Wikipedia HERE

Zeng Jinlian (simplified Chinese: 曾金莲; traditional Chinese: 曾金蓮; pinyin: Zēng Jīnlián; June 25, 1965 – February 13, 1982) was the tallest woman ever verified in medical history, taking over Jane Bunford’s record. Zeng is also the only woman counted among the fourteen people who have reached verified heights of eight feet or more.

At the time of her death at the age of 17 in Hunan, China, Zeng, who would have been 8 feet, 1.75 inches (249 cm.) tall (she could not stand straight because of a severely deformed spine), was the tallest person in the world. In the year between Don Koehler’s death and her own, she surpassed fellow “eight-footers” Gabriel Estêvão Monjane and Suleiman Ali Nashnush. That Zeng’s growth patterns mirrored those of Robert Wadlow is shown in the table below.

Age | Height of Zeng Jinlian | Height of Robert Wadlow
 4  | 5 feet 1.5 inches      | 5 feet 4 inches
13  | 7 feet 1.5 inches      | 7 feet 4 inches
16  | 7 feet 10.5 inches     | 7 feet 10.5 inches
17  | 8 feet 1.75 inches     | 8 feet 1.5 inches
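As a side note, the table above is in imperial units while the article's headline figures are metric (248.3 cm). A minimal sketch, my own addition and not from either quoted source, converting the table's feet-and-inches figures to centimeters so the Zeng/Wadlow comparison can be read both ways:

```python
def to_cm(feet, inches):
    """Convert a height in feet and inches to centimeters, rounded to 0.1 cm."""
    return round((feet * 12 + inches) * 2.54, 1)

# age: ((Zeng ft, in), (Wadlow ft, in)) -- figures taken from the table above
growth = {
    4:  ((5, 1.5),  (5, 4)),
    13: ((7, 1.5),  (7, 4)),
    16: ((7, 10.5), (7, 10.5)),
    17: ((8, 1.75), (8, 1.5)),
}

for age, (zeng, wadlow) in growth.items():
    print(f"Age {age}: Zeng {to_cm(*zeng)} cm, Wadlow {to_cm(*wadlow)} cm")
```

The age-17 figure for Zeng comes out to 248.3 cm, which matches the number quoted in the article.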

As for the tallest woman alive today, that title goes to Yao Defen, who stands somewhere between 7′ 8″ and 7′ 9″, depending on which source you consult.


From the Wikipedia article HERE….

Yao Defen (Chinese: 姚德芬; pinyin: Yáo Défēn) of China, (born July 15, 1972) is the tallest living woman, as recognized by Guinness World Records.[2] She stands 7 ft 8 in tall (2.33 m), weighs 200 kilograms (440 lb), and has size 26 (UK) / 78 (EU) feet.[3][4][5] Her gigantism is due to a tumor in her pituitary gland.

Yao Defen was born to poor farmers in the town of Liuan in the Anhui province of Shucheng County. At birth she weighed 6.16 pounds. At the age of three years she was eating more than three times the amount of food that other three-year-old children were eating. When she was eleven years old she was about six feet, two inches tall. She was six feet nine inches tall by the age of fifteen years.

The story of this “woman giant” began to spread rapidly after she went to see a doctor at the age of fifteen for an illness. The doctor (who also saw her in later years) properly diagnosed the illness but decided not to treat her, because she and her family did not have the 4,000 yuan needed for surgery (despite official claims of communism and socialism, the People’s Republic of China did not provide a free public health service, except for high government officials).[6] After that, many companies attempted to train her to be a sports star. The plans were abandoned, however, because Defen was too weak. Because she is illiterate, since 1992 Yao Defen has been forced to earn a living by traveling with her father and performing.

Yao Defen’s giant stature was caused by a large tumor in the pituitary gland of her brain, which was releasing too much growth hormone and caused excessive growth in her bones. Six years ago, a hospital in Guangzhou Province removed the tumor, and she stopped growing.

The tumor returned and she was treated in Shanghai in 2007, but was sent home for six months with the hope that medication would reduce her tumor enough to allow surgery. The second surgery was never performed due to lack of funds.

In 2009, the TLC cable TV network devoted a whole night’s programming to her. She suffered a fall in her home and had internal bleeding in the brain. She recovered and felt some happiness after a visit from China’s tallest man, Zhang Juncai.

Medical help

A British television program filmed a documentary on her and helped raise money so she could get proper medical care. They measured her and according to the documentary she is seven feet, eight inches tall. Two leading doctors in acromegaly agreed to help Yao. She was taken to a nearby city hospital, where imaging procedures revealed that a small portion of her tumor, thought to have been removed many years before, still remained, causing continuing problems including weakening vision as it pressed against her optic nerve. She returned home, then was admitted for a month under observation in the larger Shanghai Ruijin Hospital, and given dietary supplements. In that hospital, her growth hormone was greatly slowed down, although it is still a problem. Upon her return home to her mother and brother, she was able to walk with crutches, unassisted by others, and was given a six-month supply of medicines and supplements in hopes of improving her condition enough to undergo surgery.

Acromegaly

Yao currently suffers from hypertension, heart disease, poor nutrition, and osteoporosis. Acromegaly often results from a tumor within the pituitary gland that causes excess growth hormone secretion. As a result, the body’s features become enlarged. It can also delay the onset of puberty, as is the case with Yao. She has no secondary sexual characteristics. Potential complications without surgery include blindness and eventually premature death.

She lives near her mother (who is only four feet, eight inches tall) in a small village in rural China.

From the Tallest Man website HERE

Yao Defen – 7 feet 8 inches (233.7 cm)

Yao Defen of China, (born 15 July 1972) claims to be the tallest woman in the world. The Guinness Book of World Records listed the American giantess Sandy Allen as the world’s tallest woman until her death on 13 August 2008, but disputed Yao Defen’s claim until the 2011 edition of the Guinness World Records. She weighs 200 kg (440 lbs) and has size 57 (EU) (around 20 US) feet. Her gigantism is due to a tumor in her pituitary gland.

Yao Defen – Early life

Yao Defen was born to poor farmers in the town of Liuan in the Anhui province of Shucheng County. At birth she weighed 6.16 pounds. At age 3 she was eating more than three times the amount of food that other three-year-olds were eating. When she was 11 years old she was about 6 foot 2 inches tall. She was 6 foot 8 inches tall by the age of 15. The story of this “woman giant” began to spread rapidly after she went to see a doctor at age 15 for an illness. After that, many companies attempted to train her to be a sports star. The plans were abandoned, however, because Yao Defen was too weak. Because she is illiterate, since 1992 Yao Defen has been forced to earn a living by traveling with her father and performing.

Yao Defen – Pituitary Gland

Yao Defen’s giant stature was caused by a large tumor in the pituitary gland of her brain, which was releasing too much growth hormone and caused excessive growth in her bones. Six years ago, a hospital in Guangzhou Province removed the tumor, and she stopped growing. The tumor returned and she was treated in Shanghai in 2007, but was sent home for 6 months with the hope that medication would reduce her tumor enough to allow surgery. It remains unknown if the second surgery was ever performed.

Yao Defen – measured at 7 feet 9 inches tall?

A British television programme filmed a documentary on her and helped raise money so she could get proper medical care. They did measure her, and according to the documentary she was 7 feet 9 inches tall. Two leading doctors in acromegaly agreed to help Yao. She was taken to a nearby city hospital, where imaging procedures revealed that a small portion of her tumor, removed many years before, still remained, causing continuing problems, including weakening vision as it pressed against her optic nerve. She returned home, then was admitted for a month under observation in the larger Shanghai Ruijin Hospital, and given dietary supplements. In that hospital, her growth hormone was greatly slowed down, although it is still a problem. Upon her return home to her mother and brother, she was able to walk with crutches, unassisted by others, and was given a six-month supply of medicines and supplements in hopes of improving her condition enough to undergo surgery.

Yao Defen – Ill Health

Yao Defen currently suffers from hypertension, heart disease, poor nutrition, and osteoporosis. Acromegaly often results from a tumor within the pituitary gland that causes excess growth hormone secretion. As a result, the body’s features become enlarged. It can also delay the onset of puberty as is the case with Yao Defen. She has no secondary sex characteristics. Potential complications without necessary surgery include blindness and eventually premature death.

Yao Defen – Dead or Alive?

She lives with her mother (who is only 4 ft 8 inches tall) in a small village in rural China. There are many rumours on the internet saying that Yao Defen has died. It is indeed next to impossible to find any information about Yao Defen after 2009, but I think it would have been all over the internet if she were no longer alive. So for the time being, until more information becomes available, I assume she is still alive. I hope so, anyway.


Supplement Formulas To Increase Height And Grow Taller

Something I remember seeing a lot of on height increase forums are people who decide to create a type of stack or supplement combination they would try to take to increase their height.

This approach to height is used by people with both open and closed growth plates. When themiracle.com was still active, the idea of stacking supplements seemed to be very common. I recently read somewhere that the old GrowTallForum by Hakker was available at some level on the WayBack Machine. That was what led me to look at what kinds of supplements people in the past were trying out.
From a first glance it would seem that the research was very intensive, with the focus being on three things:

  • Increase the HGH release rate
  • Increase CNP release levels to raise Nitric Oxide
  • Develop chondrogenic properties

Currently I have not looked into why these supplements were cycled; all I will do in this post is list as many of the considered height increase supplements as possible. I would guess that the major reason one cycles supplements is to avoid overdosing on any one supplement and saturating the body so much that it loses its effectiveness in making a real change.

Common supplements that I have seen being used in different combinations and permutations are:

  • Glucosamine Sulfate
  • Chondroitin Sulfate
  • Melatonin
  • Methylprotodioscin and icariin (Dimethyl)
  • acteoside/harpegoside
  • Niacin (Vitamin B3)
  • Ghenerate
  • Hexarelin
  • CJC 1295
  • GHRP-2
  • Sam-e
  • Folinic acid
  • Folic acid
  • Proviron
  • MSM
  • GPLC
  • Methyl 1-D
  • I-GH1
  • Tribulus
  • Multi vitamins + extra calcium
  • L-Dopa
  • Triazole
  • L-arginine
  • L-ornithine
  • GABA.
  • Calcium
  • Vitamin D3
  • E-Bol
  • Pyridostigmine
  • Hyaluronic Acid
  • Huperzine A
  • Zinc
  • Anastrazole/letrozole
  • Bicalutamide
  • And so many more ideas and supplement names….

Conclusion: Almost all of the supplements I have looked at in my research up to this point have already been discussed and researched by the people who actually tried out the supplement cycles on GrowTallForum.com. I recently used the WayBack Machine to see why so many people have been interested in getting into contact with Hakker/James and wanting to ask him more questions. After seeing just how immense the forum was, I sent an email inquiring whether James would be willing to reactivate the forum so that height increase researchers like us can go back and scour the old posts and threads to do our own analysis and interpretation. Since there has been no reply, I guess James/Hakker has finally and permanently decided to move on and never return to this area of research. I understand his thoughts and reasoning.

The supplements so far are too many for me to analyze at this time. At some point in the next few years after the initial structure and design of this website is finished with some of the most common issues, ideas, and questions fully discussed and analyzed, I will try to take the time and go through each of the supplements individually and look at the possible feasibility of the supplements. Some of them I have only glanced at but a lot of them I have never seen before.

What is clear from the 5th podcast episode with Hakker is that if you do have open growth plates, some of the supplements will work. He had said that the M.E.N.S. routine did have results for people who were young enough to still have growth plates. So if you have open growth plates, there is a lot of potential and a high chance for success in using the supplements.

More Fashion Ideas To Dress Better And Look Taller

I thought it was appropriate to talk more about fashion ideas and tips to look taller, since one’s appearance and how others judge you involve more than just height. Appearance does matter, and dressing well, cleanly, and stylishly can create the illusion of being taller than one’s actual height, by upwards of 3-4 inches in the right circumstances.

from Yahoo! Voices (link HERE)..

10 Ways You Can Dress to Look Taller

Jodi Morse, Yahoo! Contributor Network
Aug 2, 2007

When you are short, figuring out what to wear can be pretty difficult – especially if you want to make yourself look a bit taller than you really are. Well, the most important thing for you to keep in mind is that it is possible for you to make yourself look taller than you really are, according to the way that you dress. Here, I will share with you some of the tips that I have learned through my own experiences as a short person.

Tip # 1 For Looking Taller: Tighter is Often Better

You may not be completely thrilled with the idea of wearing tight clothing, as it can be pretty uncomfortable. However, tighter is typically better. The reason may be because it helps emphasize your lines, ultimately helping you look taller. When you wear baggier clothing, you will probably find that your lines will not be so visible, which will ultimately cause you to look shorter. The only instance in which bagginess is an exception when it comes to dressing as a short person is when it comes to shirts and jeans. When you tuck a baggy shirt into tighter jeans, you will notice that your legs will look just as long (or tall) as they would if you were to wear a tighter shirt.

Tip # 2 For Looking Taller: Avoid Wearing Capris or Crop Shorts

Chances are that you have always loved the appearance of Capris or crop shorts. If you are aiming to look taller, however, wearing either of these types of shorts is not going to help. For some reason, they tend to make short people’s legs look even shorter and stubbier. You may be wondering what you should wear if you are not comfortable wearing short shorts. Well, your best option is probably to try out Bermuda shorts. Since these shorts will typically cut off right before the knee begins, or in some cases right below the knee, your legs will get an overall longer appearance. Keep in mind that the same rule applies to skirts; those that end close to the ankles will make you look shorter, whereas those that end closer to the knee will help elongate your legs.

Tip # 3 For Looking Taller: Don’t Wear Flat Shoes

Lately, flat shoes have been considered very fashionable. Unfortunately, if you are really interested in looking taller, flat shoes are not the best choice for you. These shoes will be likely to draw attention to your feet, which will look very flat in them. What this will do is emphasize the fact that you are short, and in some cases, you may even look shorter because of these shoes. Opting for medium heels (as heels that are too high can just cause you to look like you are trying too hard) or chunkier shoes is typically the best option.

Tip # 4 For Looking Taller: Forget Everything You’ve Heard About Skinny Jeans

You have probably always been told that you should avoid skinny jeans because they can cause your legs to look chubbier and even make you look shorter. While this can be true in some cases, it is important to keep in mind that it is not always true. If you have long or slender legs, skinny jeans can help emphasize this. Darker colored skinny jeans are often the most complementary. One of the main reasons that skinny jeans may add length to your appearance is because they tend to add a slimming look, which is especially true about darker colored skinny jeans. Whether you like the idea or not, you should at least try a few pairs of skinny jeans on. You may be surprised with the results that you see.

Tip # 5 For Looking Taller: Wear Long Shirts

Wearing shirts that are longer in length can cause your torso to look longer. This can add several “visual inches” to your entire body. It is most common for tank tops and tee-shirts to be sold in longer lengths. Lately, however, you will typically be able to find long shirts for all seasons. (Once again, keep in mind that tighter is better. A long, baggy shirt may cause your body to look both shorter and chubbier).

Tip # 6 For Looking Taller: Opt For Darker Colored Clothing

It is a well known fact that being slimmer actually causes you to look taller also. Unfortunately, not all of us are as slim as we would like to be. You have probably already heard that wearing darker colored clothing can actually make you look slimmer. Opting for colors such as black, navy blue, forest green, maroon, and chocolate brown can make you look slimmer – and ultimately taller. Opting for darker colored tops and pants will both add length to your appearance.

Tip # 7 For Looking Taller: Use Layering to its Advantage

Although you may not even realize it, when it is done right, layering can help your body look a lot taller. Of course, there are several things that you will want to keep in mind if you plan to layer. The first thing is that the darker colored shirt should always go on the outside, as this is the color that people will be the most focused on when they look at you. Another thing that you should keep in mind is the type of shirts that you are layering. It is often best to layer a v-neck shirt on top of a tank top, mainly because a v-neck shirt tends to add a longer appearance to the body. You should avoid layering ruffled shirts, mainly because they only add emphasis to the chest and neck area, rather than the rest of your body. Test out several shirts when layering to see what makes you look the tallest.

Tip # 8 For Looking Taller: Choose the Right Hair Style

Although you probably think that your hair style has no effect on how tall you look, it can. Longer hair styles actually can cause you to look shorter because they cause the eye to wander mainly to your hair, rather than your body. Opting for shorter hair styles will provide you with the tallest appearance because it will put more emphasis on your neck, rather than your hair. This is especially true if your neck is longer. Medium hair styles are also acceptable if you wish to look taller.

Tip # 9 For Looking Taller: Test Out Different Types of Stripes

Stripes are known to provide the visual length that you are hoping for. Most short people find that vertical stripes work the best for them. However, there are cases in which people also claim that horizontal stripes work wonders for them also. The main key is to try out a few different types of stripes and decide what types may help you look taller, as well as what types may cause you to look shorter. Base your future clothing decisions on the results that you get.

Tip # 10 For Looking Taller: Emphasize With Bright Colors

Is there one part of your body that you really want to emphasize which will help you look taller? For example, let’s pretend that you have long, slender legs. If you want to use your legs to make your whole body look taller, then you should try shopping for bright colored pants, whether they are red, orange or lime green. If you have a slim stomach, you may want to emphasize this using a brightly colored shirt, whether it is solid or has graphics. No matter what you decide on emphasizing, realize that bright colored clothing is probably the way to go. It is often best to match that with dark colored clothing (e.g. a bright pink shirt with dark colored jeans).

These are just ten of the very easy things that you can do when dressing in order to help yourself look taller! Remember that being short is not always a bad thing though! Being happy with who you are and showing confidence will help you look taller in itself!


Me: Now, I know I have already posted this in a previous post, but I wanted to post it again in case readers missed it the first time. Dress better and you will look better and taller, and people will think you are taller. This post was found on the Art Of Manliness website HERE.

Dressing Taller: 10 Tips for Short Men

by ANTONIO on JUNE 7, 2011

This series is supported by Gillette.

Short men have always had a tougher row to hoe than their taller fellows. It can be frustrating to be picked last for the pick-up basketball game, to feel like you’re overlooked when walking into a party, and to struggle to see your favorite band at a concert.

And then there are those studies that say that tall men are perceived as more powerful and better leaders, are more desirable to women, and make more money (almost $1,000 more for every inch of height).

But short men shouldn’t despair. The news isn’t all bad.

First, while height may give men a leg up in the race for success (US presidents have been on average 4 inches taller than the general male population), there are always exceptions to the rule. Andrew Carnegie (5’0”)! Martin Luther King Jr. (5’7”)! Harry Houdini (5’5”)! TE Lawrence (5’6”)! Robert Reich (4’10”)! And have you seen Dennis Kucinich’s wife?

And second, while there isn’t much you can do, short of gruesomely lengthening your bones, to physically increase your height, there are ways to appear taller. Key to this is the way you dress and present yourself, and today we’ll share ten tips on how you can use style to enhance your stature, and perhaps more importantly, your confidence.

The Guiding Rule – Always Streamline Your Look

Looking taller is all about getting viewers’ eyes to travel smoothly up your body. It’s pure illusion: the more their eyes have to sweep upward, the taller their brains will register whatever they’re looking at as being.

That means that a shorter man wants to ease and encourage the viewer’s eyes upward towards his face. Visual clutter–such as eye-grabbing stuff on the body–breaks up the impression of height. That means staying away from obvious accessories like big, chunky watches, but it also means keeping an eye out for things as simple as the pockets on your suits and shirts. Something as simple as a pocket flap instead of an unadorned slit pocket can clutter up your appearance and lessen the impression of height.

10 Tips on Dressing Taller

FYI – I put these ten tips in order of practicality and cost. I realize some of these are beyond some men’s resources or are not options worth considering – but I lay them out here so that you can make that decision yourself.

1. Monochromatic Color Themes

Along the same lines as minimizing visual clutter, removing contrasting color from your appearance helps streamline the way you look. Keeping all your clothes within a fairly consistent color theme, especially a dark one, will create an illusion of height. Different color shades are fine–just try to keep it loosely monochrome.

When you do wear different colors or different shades of the same color, try to weight the darker colors toward the bottom half of your body. That way people’s attention starts down near your feet and travels upward. Dark trousers with a lighter shirt create a lengthening effect; a darker shirt with lighter pants shortens your appearance.

2. Wear Vertically-Oriented Patterns

Most people have heard that vertical stripes are “slimming” and horizontal stripes are “widening.” That’s just a simplification of the same visual effect we’ve already been talking about: where people’s eyes go when they look at you. Patterns that run horizontally make you seem wider because the eye wants to follow them naturally out to the sides of your body.

Unbroken vertical stripes are one of the best ways to add an impression of height without seeming to try for it.  Dress shirts that increase the perception of height ideally have striping that is narrow enough to not create broad empty spaces of monochrome but wide enough to be visible at a glance. The equal-width alternation of white and colored stripes–often called candystriping–is a good choice.

Textured cloth with a visible up-and-down pattern has the same effect as any other vertical striping, so corduroy or very narrow herringbone weaves are also worth working into the wardrobe. Other than those very definitively vertical textures, however, stick to smoother fabrics where possible — rough textures add the visual clutter you want to avoid.


3. Wear Close Fitting Clothing

A loose fit on a short man actually emphasizes his petite frame–it makes him look sloppy, and it signals that he’s too small to find clothing that fits him right. Don’t let your own clothing send this message to the world.

When shopping for menswear, pay close attention to where your clothing sits on your body when you try it on. Most men are used to wearing clothing that is 1 to 2 sizes too large on them, and smaller men who have never given it much attention are some of the worst offenders.

Steer clear of jackets that hang loose in the armpits, even if the sleeves are short enough for your arms, and avoid any trousers with a lot of slack cloth in the crotch. Trust me, this doesn’t make you look more endowed. Instead, that sort of bagginess leads straight to the stereotypical “kid in his father’s suit” look.

Remember that most menswear is deliberately cut loose to accommodate as many body types as possible. Clothing marked small isn’t made for one type of small; it’s often made to try to accommodate shorter men who are anything from stout to round to thin. And the results are rarely flattering.

Savvy short shoppers often find a brand, oftentimes from a particular designer, that consistently suits them. They do this because designer clothing is often built for a narrower variety of body types, and as a result accommodates those limited builds better than the one size made to fit all variety. Designer clothes generally cost a bit more, but carefully watching sales and knowing when and where to shop for your particular size can lead to savings that make buying higher end clothing affordable.

Finally, have a trusted tailor who you can take your clothing to. Ensure he has an understanding of proportion and the needs of your body type, and you’ll find the adjustments he makes can transform your look more than any of the other tips in this article. It’s relatively inexpensive to have sleeves or cuffs shortened; more complicated work like having your trousers slimmed or jacket torso tightened isn’t too expensive either. Having a jacket shortened, or adjusting shoulders on a shirt is often limited by proportion–but again these small adjustments will transform your look from dopey to dashing.

4. Smaller Proportions

Be aware that as a smaller man you won’t always want the exact same proportions in your clothes as other men. For example, it’s traditional to wear a sport coat or suit cut so that a half-inch or so of shirt cuff shows beyond the end of the sleeve. A shorter man, however, wants to pair shirts and jackets so that there’s less of a broad band–as little as a quarter-inch. A sliver of cloth color down around the wrists will look more proportional on shorter arms than ¾ of an inch.

The parts of your clothing that fold over one another contribute a lot to your visual effect. On your upper body, that usually means the shirt collar and the jacket lapel, if a jacket is worn. Try to keep both of those on the narrower side–though be cautious with lapels; jackets with very broad or very wide lapels run the risk of looking dated, depending on when that particular extreme was in fashion.

Collars with shorter points that aim downward help as well. Stay away from anything with an extreme spread (more than 120 degrees) or longer collar points (2.5+ inches), especially when the collar points are angled dramatically outward.

Your necktie should be on the slimmer side as well, particularly if you have a smaller torso; if your torso is very broad, a narrow tie may start to look undersized. However, that is a better problem to have than a wide tie that overemphasizes a broad torso.

It may seem like splitting hairs to recommend narrower collar spreads, shorter trouser cuffs (or no cuffs at all), 2 or 1 button jackets, thinner lapels, and pockets closer together on a jacket. But when you start combining all the usual elements of a piece of clothing in smaller proportions, the effects add up. A small difference here, a small improvement there–next thing you know you have a significantly improved look.

Most of these details are things that different companies do in their own style–you don’t need lots of expensive tailoring, just the patience to figure out which brands have the smaller, more vertically-tilted details that work best for you.

5. Wear Attention Grabbing Details Up High

You can keep attention moving up from your feet toward your head by weighting the brightest details at the top of your body. A pocket square or a brightly-colored tie help guide the eye’s motion upward. Just be careful of adding too much clutter all at once. A bright lapel pin on its own is helpful–worn at the same time as a patterned tie and a pocket square, it edges into the distracting category. More casual outfits can utilize details such as epaulets on a shirt’s shoulders or a contrast inner collar on a dress shirt.

Resist the temptation to add a few inches with a hat unless you regularly wear one–if not worn naturally or with confidence it can backfire on the wearer. Some even argue that the visual effect is actually shortening–a hat puts a “lid” on your body and stops the viewer’s gaze dead. I have seen it work both ways. Again, this is an attention-getting detail that takes confidence, practice, and the knowledge of which hat compliments you.

Always keep it simple, vertically-oriented, and limited to one or two extras at most.

6. Wear the Right Clothing

Wear a Jacket – Wearing a sport jacket or suit jacket builds up the shoulders–taller and more pronounced shoulders emphasize height. Use this to your advantage every chance you get, and match the jacket with either trousers of the same fabric (suit) or trousers of a similar shade (sport jacket). Again–know how to buy the right type of suit for maximizing height by following the guidelines in this article.

Trousers at the Waist – Shorter men benefit from a longer leg line, and you get a longer trouser leg by wearing the waistband higher. Wear your pants at the natural waist rather than down on the hips, which only makes your legs look stubby. Trousers at the natural waist don’t need a belt cinched tight the way they do on the hips, which keeps your middle from looking distractingly pinched. For the best effect, wear trousers without belt loops and use suspenders.

Avoid Shorts and Short-Sleeved Shirts – Short men are short because their limbs are smaller than those of their tall counterparts. Wearing clothing that draws attention to your limbs, especially if you’re big or built, makes you look shorter because your limbs are proportionally more compact. Although not always practical–especially in the summer–a man on the short side should consider linen trousers and lightweight long-sleeve shirts he can roll up on the forearm. It’s a classier look that helps create a streamlined appearance.

7. Physically Add Height

Playing around with patterns, collar sizes, and details is all a good way to make a combined impression of extra height. But what if you actually want to add real height?

It’s doable. But remember to do this in moderation. Some short men find it useful to wear a heeled shoe, and there are definitely styles that look fine with a half-inch or so of heel on them, but know what you’re buying. Manufacturers that advertise specifically as “for short men” are often slapping chunky heels on styles meant to be worn with a more moderate heel, and the result is eye-catching and tacky. Stick to black pumps for a formal look or heeled boots in more casual situations. And always avoid athletic shoes or regular dress shoes that come with an exaggerated heel–you’ll just end up tripping.

Heel inserts are a matter of personal preference. They add height but can be uncomfortable, and it can be embarrassing to have to take your shoes off in public if you have inserts. Definitely don’t wear them with an already thick-heeled shoe–you’ll end up tilted forward like a woman in high heels.

8. Shop Internationally

Mass-manufactured clothing is made for specific regions based on taste and average target customer size. As such, American clothing is big; however, there are regions outside the ole USA that make clothing for a smaller demographic. Think Japan & Italy–two countries where style is at the forefront and clothing is manufactured for a man who is much smaller than the average American frame.

The internet has made it possible to get clothing from overseas without making the trip yourself–the downside is that international shipping isn’t always cheap, and many of the best online stores in Italy or Japan do not have an English storefront. Google Translate helps–but it doesn’t translate sizing, especially when you’re trying to figure out what equals what: inches to centimeters, and then brand variation on top of that! If you go this route, try to work with a merchant with excellent customer service or a website that gives you exact measurements of the garment you’ll be sent. Start slowly, ensure you get the fit right, and then buy in bulk to save on the shipping!

Ideally though, you’d be able to travel to the country and find the deals yourself, getting a closet full of great clothes and a memorable experience.

9. Visit the Young Man’s Department

There is great clothing to be found in the “Youth” section of American stores. Some styles obviously won’t work on an adult, but there’s a good number of clothing manufacturers who make scaled-down versions of perfectly presentable adult outfits.

The biggest challenge of the Youth/Boys department may turn out to be fit in the chest and stomach. Most adult men wearing youth sizes need an XL or an L, which have recently started to be made looser and looser. “XL” for a child now carries an expectation of weight as well as height, which wasn’t as true ten or fifteen years ago–you may need to seek out long-established and more old-fashioned manufacturers to find youth-sized clothing that’s long enough for a short adult but not cut for a very heavyset kid.

An added bonus is that these clothes are oftentimes value priced. If you’re small enough to fit clothing marketed for children and young adults, it’s worth the minor hit to the pride to browse the children’s section of a few high-quality clothing or department stores.

10. Go Custom or Buy from a Specialty Store

Seeking out a custom men’s clothier or short clothing specialist who can help optimize your look is an option many men take. They realize a second set of eyes and years of experience dealing with hundreds of men with similar problems gives a clothier expert status; the best study their craft and can build entire wardrobes for their clients that not only make them look taller but are interchangeable and functional for maximum wear.

Finally, keep your look natural.  By this I mean you have to be comfortable in your clothing – wear IT, don’t let it wear YOU.  There are a lot of tips in this post…DO NOT implement all of them into a single outfit. Instead pick a few and apply them in moderation over the next few months. Keep the ones that work, discard the tips that don’t.

And remember that being a sharp dressed man is all about confidence. Know who you are and have fun expressing that individuality with your personal style.

Theories On Delaying Puberty To Extend The Growth Period

After all the research we have done, a question readers often ask is what they can do, or what supplements they can take, to at least delay the closure of their growth plates, which once closed will not reopen.

The most common reply from some of the regular readers is to use compounds that can inhibit or block the effects of estrogen, known as aromatase inhibitors. The two most commonly discussed compounds tried for delaying growth plate closure are Letrozole and Anavar. Both prevent estrogen from acting on the right type of receptor. The effect estrogen seems to have on the growth plate is that it increases the rate at which the chondrocytes in the resting zone of the epiphyseal plate are used up.

From the studies “Depletion of resting zone chondrocytes during growth plate senescence.” and “Effects of estrogen on growth plate senescence and epiphyseal fusion.”

The conclusion from the 1st study is…

Our findings support the hypotheses that growth plate senescence is caused by qualitative and quantitative depletion of stem-like cells in the resting zone and that growth-inhibiting conditions, such as glucocorticoid excess, slow senescence by slowing resting zone chondrocyte proliferation and slowing the numerical depletion of these cells, thereby conserving the proliferative capacity of the growth plate. We speculate that estrogen might accelerate senescence by a proliferation-independent mechanism, or by increasing the loss of proliferative capacity per cell cycle.

The conclusion stated very clearly by the 2nd study was…

“Our data suggest that (i) epiphyseal fusion is triggered when the proliferative potential of growth plate chondrocytes is exhausted; and (ii) estrogen does not induce growth plate ossification directly; instead, estrogen accelerates the programmed senescence of the growth plate, thus causing earlier proliferative exhaustion and consequently earlier fusion.”

The growth plate goes through senescence, the process of aging. As it ages, its activity and ability to proliferate decrease, and the stem-like cells in the resting zone get used up. It seems from the 1st study that one idea is to increase the amount of glucocorticoid to both inhibit growth and slow down resting zone chondrocyte proliferation, along with the rate at which chondrocytes are depleted. Both studies suggest that estrogen acts through an indirect pathway to reduce the number of chondrocytes available for proliferation by increasing the rate at which they are depleted.

Some studies suggest that compounds like glucocorticoids and dexamethasone can slow down growth plate senescence, but other studies show that glucocorticoids and dexamethasone also decrease chondrocyte proliferation and actually reduce longitudinal lengthening, such as “Dexamethasone-induced growth inhibition of porcine growth plate chondrocytes is accompanied by changes in levels of IGF axis components” and “Growth retardation induced by dexamethasone is associated with increased apoptosis of the growth plate chondrocytes”.

This shows that dexamethasone may be able to slow down the rate of senescence, but it might also contribute to it, since it increases the rate of apoptosis of growth plate chondrocytes. If there were any ideas about using glucocorticoids or dexamethasone to delay puberty, I personally would not suggest them, since what you are hoping for is that by delaying and inhibiting the depletion of the chondrocytes, you will get “catch-up growth” afterwards to compensate for the longitudinal growth that was not achieved in the earlier stages, when the chondrocytes could have been proliferating and going through senescence at a normal rate.

The other option I’ve seen to prevent penetration through the perichondrium, which protects the epiphyseal hyaline cartilage from blood vessels and from the vascularization and ossification of all the cartilage, is a compound that has not been looked at by other height increase researchers yet: Chondromodulin, Types I and II. Technically the Chondromodulin protein is termed a cartilage-derived angiogenesis inhibitor.

I wrote two posts before on the possibility of using this compound to help keep the cartilage from being infiltrated and calcified too quickly: “Increase Height And Grow Taller Using Chondromodulin-1” and “Further Analysis On The Possibility Of Using Chondromodulin-I and Chondromodulin-II To Increase Height”.

From my reading on the periosteum and the perichondrium, it seems that the perichondrium might turn into the periosteum through a sort of differentiation pathway or process. Both of them surround a specific type of tissue. The perichondrium is wrapped around developing hyaline cartilage specifically to prevent vascularization; cartilage gets its necessary nutrients not through blood vessels but through diffusion. Vascularization of the hyaline cartilage leads to calcification. This would imply that if we can get more chondromodulin into our system, specifically to protect the hyaline cartilage from being penetrated, we might be able to extend the lifespan of the growth plates slightly longer.

This may be just a repost of the previous post but I wanted to cite 3 PubMed studies again, all showing that Chondromodulin Type I has chondroprotective properties.

PubMed study #1

“Our findings indicate that the antiangiogenic factor chondromodulin 1 stabilizes the chondrocyte phenotype by supporting chondrogenesis but inhibiting chondrocyte hypertrophy and endochondral ossification.”

PubMed study #2

“Cartilage-generated matrix components chondromodulin-I (ChM-I) synergistically stimulates growth and differentiation of chondrocytes in the presence or absence of FGF-2. In contrast, ChM-I inhibits the proliferation of vascular endothelial cells and tube formation, thereby further stimulating cartilage growth and inhibiting replacing cartilage by bone in an early stage. Another cartilage-derived chondromodulin-II (ChM-II) also stimulates cartilage growth. However, ChM-II does not inhibit vascularization but stimulates osteoclast differentiation.”

PubMed study #3

“Chondromodulin 1 stabilizes the chondrocyte phenotype and inhibits endochondral ossification of porcine cartilage repair tissue.”

CONCLUSION:

Our findings indicate that the antiangiogenic factor chondromodulin 1 stabilizes the chondrocyte phenotype by supporting chondrogenesis but inhibiting chondrocyte hypertrophy and endochondral ossification.

Interpretation:

Technically Chondromodulin Type I does disrupt the bone formation process. The researchers state that it gets in the way of endochondral ossification, which is what actually causes longitudinal growth in long bones. We find out from the study “What makes the permanent articular cartilage permanent?” that it is Chondromodulin Type I which keeps the articular cartilage relatively permanent, unlike the epiphyseal cartilage. What I have always proposed is that as long as there is some type of cartilage, preferably hyaline cartilage with its uniform collagenous fibers, we can more easily cause longitudinal growth. I am willing to propose that some of the results we find from LSJL come from the fact that the individual might have had some cartilage, epiphyseal or otherwise, in their long bone to begin with; from a purely mechanical point of view, squeezing the bone laterally could then push the relatively flexible, elastic cartilage outward and lengthen it, causing longitudinal growth.

At this point, I would say that the best way to delay the end of puberty and extend the growth period is to get some injections of chondromodulin close to the growth plate regions in the body. This would cause the cartilage to not ossify as quickly. It would give more time to use certain techniques like PEMF or LSJL to gain some extra height and growth.

Tyler-Here’s another study

The effects of delayed puberty on the growth plate.

“Many athletes are beginning intense training before puberty, a time of increased bone accrual when up to 25% of total bone mineral accrual occurs. Female athletes experiencing late or delayed pubertal onset may have open epiphyseal plates that are vulnerable to injury. This investigation’s purpose was to determine whether a delay in puberty (primary amenorrhea) affects the growth plate immediately post-puberty and at maturity.

Forty-eight female Sprague–Dawley rats (23 days of age) were randomly assigned to four groups (n=12); short-term control (C-ST), long-term control (C-LT), short-term GnRH antagonist (G-ST) and long-term GnRH antagonist (G-LT). At 25 days of age, daily gonadotropin-releasing hormone antagonist (GnRH-a; Cetrotide™, Serono, Inc.) injections were administered delaying pubertal onset. Left tibias were analyzed. Stained frontal slices of proximal tibia (5 μm thick) were analyzed in hypertrophic, proliferative and reserve zones for total height, zone height, and cell/column counts. All procedures were approved by (IACUC) at Brooklyn College.

Growth plate height was 19.7% wider in delayed puberty (G-ST) group and at maturity was 27.9% greater in G-LT group compared to control (C-LT) (p<0.05){but increased growth plate height does not always result in larger final height}. No significant differences were found in short or long-term growth plate zone heights or cell/column counts between groups (p>0.05). Growth plate zone height normalized to total height resulted in 28.7 % larger reserve zone in the short term GnRH-a group (G-ST) but the proliferative zone was 8.5 % larger in the long-term group compared to the control group (p<0.05). Normalized to growth plate height a significant decrease was found in column counts in proliferative zones of the short and long-term GnRH-a groups.

Current data illustrates delayed puberty using GnRH-a injections results in significant growth plate height and decreases proliferative column counts and zone height, potentially contributing to decreases in bone mass at maturity.”

“Data from this study also showed delayed pubertal onset in the GnRH-a groups resulted in increased height but bone bridging would indicate that even though statistically growth plates are at different heights senescence was near with growth plate fusion at a time point similar to control animals.”

PKDCC

Xenopus Pkdcc1 and Pkdcc2 Are Two New Tyrosine Kinases Involved in the Regulation of JNK Dependent Wnt/PCP Signaling Pathway

“Protein Kinase Domain Containing, Cytoplasmic (PKDCC) is a protein kinase which has been implicated in longitudinal bone growth through regulation of chondrocytes formation. Nevertheless, the mechanism by which this occurs remains unknown. Here, we identified two new members of the PKDCC family, Pkdcc1 and Pkdcc2 from Xenopus laevis. Interestingly, our knockdown experiments revealed that these two proteins are both involved on blastopore and neural tube closure during gastrula and neurula stages, respectively. In vertebrates, tissue polarity and cell movement observed during gastrulation and neural tube closure are controlled by Wnt/Planar Cell Polarity (PCP) molecular pathway. Our results showed that Pkdcc1 and Pkdcc2 promote the recruitment of Dvl to the plasma membrane. But surprisingly, they revealed different roles in the induction of a luciferase reporter under the control of Atf2 promoter. While Pkdcc1 induces Atf2 expression, Pkdcc2 does not, and furthermore inhibits its normal induction by Wnt11 and Wnt5a. Altogether our data show, for the first time, that members of the PKDCC family are involved in the regulation of JNK dependent Wnt/PCP signaling pathway.”

” both Pkdcc and Gli3 [may] cooperate on the regulation of long bone formation by modulating the temporal kinetics of columnar and hypertrophic chondrocyte domains establishment”

“Pkdcc could also modulate Wnt signaling, since inactivation of Wnt5a also alters the transition between proliferating to hypertrophic chondrocytes. Since Pkdcc regulates protein export from Golgi, its inactivation may directly interfere with either the secretion of the relevant signals or cell-surface localization of receptors ”

“Cell movements are essential for the correct shape of body axis and organ formation during embryo development. These morphogenetic cell movements are not stochastic, they undergo extensive control by distinct signal transduction pathways. One of these pathways is Wnt/Planar Cell Polarity (PCP) signaling pathway that, for example, in polarised tissue, coordinate the morphogenetic processes of the cells in the epithelial sheets plane. A set of core proteins was identified to be involved in PCP pathway, in both vertebrates and invertebrates. In vertebrates this group include the transmembrane receptor Frizzled (Fz), the cytoplasmic molecules Dishevelled (Dvl), Diego (Dgo) and Prickle (Pk), the transmembrane protein VanGogh/Strabismus (Vang/Stbm) and the cadherin-like protein Flamingo/Celsr1 (Fmg/Clsr1). These core PCP components were identified as genes whose inactivation leads to cell polarity mis-alignment. The PCP is involved in the coordination of cells within a tissue sheet, either by direct cell-cell interaction or under the influence of a diffusible ligand-based signalling system [9]. This occurs because these proteins localize in different regions inside the cell: Fz, Dvl and Dgo are localized in the proximal region, Vangl2 and Pk in distal region and Clsr1 localize in both distal and proximal regions, which is essential for the proper establishment of polarization”

“Pkdcc1 and Pkdcc2 promote recruitment of Dishevelled to the plasma membrane through DEP domain”

“Pkdcc1 alone is able to induce the expression of Atf2-luc, and the activation of non-canonical Wnt signaling. Curiously and contrary, Pkdcc2 is not able to activate Atf2 expression, inhibiting the normal activation of JNK dependent non-canonical Wnt downstream of Wnt11 or Wnt5a”

The Wolff’s Law On Bone Remodeling And Transformation, Part II

Sorry for the recent absence of new posts over the last 3-4 days. I took an extended break from the website, but I am back now and will be getting more posts out. This post is the 2nd of two major articles I will use to talk about Wolff’s Law of bone remodeling.

The article I have found and will post below shows that the old law of Wolff is not as scientifically valid as I thought. It was written for the American Journal of Physical Anthropology in 2006.

The critical thing about this article is that it looks completely objectively at how well Wolff’s Law describes actual bone loading. The hind leg loading of rodents is looked at again, and the article can be used to analyze the feasibility and effectiveness of the Lateral Synovial Joint Loading technique.

The Link to the Document HERE. As always, I will highlight the most important parts of the article.


Who’s Afraid of the Big Bad Wolff?: ‘‘Wolff’s Law’’ and Bone Functional Adaptation

Christopher Ruff,1* Brigitte Holt,2 and Erik Trinkaus3

1Center for Functional Anatomy and Evolution, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
2Department of Anthropology, University of Massachusetts, Amherst, Massachusetts 01003
3Department of Anthropology, Washington University, St. Louis, Missouri 63130-4899

ABSTRACT “Wolff’s law” is a concept that has sometimes been misrepresented, and frequently misunderstood, in the anthropological literature. Although it was originally formulated in a strict mathematical sense that has since been discredited, the more general concept of “bone functional adaptation” to mechanical loading (a designation that should probably replace “Wolff’s law”) is supported by much experimental and observational data. Objections raised to earlier studies of bone functional adaptation have largely been addressed by more recent and better-controlled studies. While the bone morphological response to mechanical strains is reduced in adults relative to juveniles, claims that adult morphology reflects only juvenile loadings are greatly exaggerated. Similarly, while there are important genetic influences on bone development and on the nature of bone’s response to mechanical loading, variations in loadings themselves are equally if not more important in determining variations in morphology, especially in comparisons between closely related individuals or species. The correspondence between bone strain patterns and bone structure is variable, depending on skeletal location and the general mechanical environment (e.g., distal vs. proximal limb elements, cursorial vs. noncursorial animals), so that mechanical/behavioral inferences based on structure alone should be limited to corresponding skeletal regions and animals with similar basic mechanical designs. Within such comparisons, traditional geometric parameters (such as second moments of area and section moduli) still give the best available estimates of in vivo mechanical competence. Thus, when employed with appropriate caution, these features may be used to reconstruct mechanical loadings and behavioral differences within and between past populations.

The idea that bone form reflects in some way its mechanical loading history during life is fundamental to many paleontological and bioarchaeological studies of skeletal material. While physical context and material culture give clues to past behavior, analysis of the skeletons themselves is the most direct way to reconstruct individual behavior, and to explore intra- and interpopulational differences in behavior (e.g., Larsen, 1997; Drucker and Henry-Gambier, 2005; Scott et al., 2005). Reconstructions of body size and shape from skeletal remains are also dependent to some degree on assumed relationships between mechanical loadings and bone morphology (Ruff, 1995, 2003; Delson et al., 2000; Auerbach and Ruff, 2004). The phenomenon of bone adaptation to imposed mechanical loadings is often loosely referred to as “Wolff’s law,” although as noted by others (Bertram and Swartz, 1991; Cowin, 2001b; Pearson and Lieberman, 2004; and see below), there are problems with this representation. Regardless of semantic issues, the general concept that bone adapts to its mechanical environment during life, and therefore that differences in morphology can be used to investigate differences in past mechanical environments, is widely accepted among paleoanthropologists and bioarchaeologists.

Several recent studies, however, beginning with the often-cited review by Bertram and Swartz (1991), called into question at least portions of “Wolff’s law” as it is generally understood (Forwood and Burr, 1993; Demes et al., 1998, 2001; Lovejoy et al., 2002, 2003; Ohman and Lovejoy, 2003; Lieberman et al., 2004; Pearson and Lieberman, 2004). A number of issues have been raised, including the precise meaning of the “law,” the validity of the experimental evidence for bone functional adaptation, correspondence between in vivo strain measurements and bone structure, the genetic vs. environmental determinants of bone form, age dependency of bone functional response to loading, and whether skeletal morphology is mechanically “ideal.” Because the general concept of bone functional adaptation is so pervasive in biological anthropology and indeed biology (Roesler, 1987), it is important to carefully evaluate these issues/objections and their implications for current research approaches. We do so here, and attempt to clarify both the limits and potential of bone structural analyses. The emphasis here is on cortical bone distribution in long bone diaphyses, in part because most of the above studies also focused on this aspect of skeletal form, and because this has been a very active area of anthropological research over the past several decades (e.g., Endo and Kimura, 1970; Kimura, 1971; Lovejoy et al., 1976; Jungers and Minns, 1979; Lovejoy and Trinkaus, 1980; Ruff and Hayes, 1983; Schaffler et al., 1985; Trinkaus and Ruff, 1989; Demes and Jungers, 1993; Ruff et al., 1993; Runestad, 1997; Trinkaus et al., 1999; Stock and Pfeiffer, 2001; Holt, 2003; Weiss, 2003; Beauval et al., 2005; Carlson, 2005). This list is not exhaustive; in fact, we make no attempt here to provide an encyclopedic review of recent literature on this general topic, which is voluminous (e.g., Martin et al., 1998; Cowin, 2001a; Pearson and Lieberman, 2004). Rather, we confine ourselves to key works that are specifically relevant to addressing the issues posed above and that provide historical context for the ideas of concern.

‘‘WOLFF’S LAW’’ VS. BONE FUNCTIONAL ADAPTATION

As noted by Cowin (2001b, p. 30–31), current usage of the term “Wolff’s law” usually involves only the general concept that “over time, the mechanical load applied to living bone influences the structure of bone tissue.” However, he went on to point out that Wolff actually had something much more specific in mind, namely the formulation of strict mathematical rules governing this process, particularly with respect to the development of trabecular orientation in long bones (the “trajectorial theory”), most famously expressed in the proximal femur. Wolff himself nicely summarized this argument in the introduction to his 1892 treatise (Wolff, 1892; translation in Wolff, 1986, p. 1): “Thus the law of bone remodeling is the law according to which alterations of the internal architecture clearly observed and following mathematical rules, as well as secondary alterations of the external form of the bone following the same mathematical rules, occur as a consequence of primary changes in the shape and stressing or in the stressing of the bones.”

Many authors criticized Wolff’s mathematical treatment of bone modeling/remodeling, which involved both engineering and biological misconceptions (for historical reviews, see Roesler, 1981, 1987; Martin et al., 1998; Cowin, 2001b). The “false premise in Wolff’s law” discussed by Cowin (2001b) involves modeling real bones as solid, homogeneous, and isotropic structures subjected to static applied loads, which is strictly incorrect. However, neither Cowin (2001b) nor any of the other recent authors who critiqued Wolff’s law denied the importance of mechanical loading in the development of bone form, i.e., the more “general” version. This is an important point, since the two versions have sometimes been confused. For example, the critique by Cowin (2001b) (of the strict version) was cited by Currey (2002, p. 159) (again with reference to the strict version), who in turn was quoted by Ohman and Lovejoy (2003) in their more general critique of Wolff’s law. This confounding of the more general with the more specific version of the “law” unnecessarily confuses the issue: like many others, neither Cowin (2001b) nor Currey (2002) intended their critiques to imply a negation of the general version; both authors have, in fact, spent most of their careers refining our knowledge of mechanically adaptive mechanisms in bone.

Given this potential confusion, it may be better to simply discard the term “Wolff’s law” in its more general sense, as recommended recently by several authors (Martin et al., 1998; Cowin, 2001b; Pearson and Lieberman, 2004). Following the original lead of Roux (1881), taken up by more recent investigators (Churches and Howlett, 1982; Cowin et al., 1985; Lanyon and Rubin, 1985), the term “bone functional adaptation” seems appropriate for this more general meaning. As summarized by Roesler (1981), the writings of Roux (1881) incorporated two important principles: 1) organisms possess the ability to adapt their structure to new living conditions, and 2) bone cells are capable of responding to local mechanical stresses. Although not without their own problems (Roesler, 1981), the ideas of Roux (1881) encapsulate much of the more general concept of bone functional adaptation as understood today. (In fact, as noted by Cowin (2001b), some researchers suggest renaming the more general version of Wolff’s law “Roux’s law.”)

Figure 1 is a schematic diagram, taken from Lanyon (1982), of perhaps the simplest representation of bone functional adaptation in a more modern sense (see also Lanyon and Skerry, 2001). The bone modeling/remodeling stimulus is based on strain (not stress), the actual physical deformation of the bone tissue, and acts through feedback loops. Increased strain (e.g., through an increase in activity level) leads to deposition of more bone tissue, which then reduces strain to its original “optimum customary level.” Decreased strain (e.g., through inactivity) leads to resorption of bone tissue, which again restores the original strain levels. Many other authors have embraced the general idea of a “customary” or “equilibrium” strain level window above which bone deposition is stimulated and below which resorption is stimulated (e.g., Carter, 1984; Frost, 1987; Turner, 1998), although there are many qualifications to and variations on this general model. One of the most important qualifications is that the “customary strain level” to which bone tissue is adapted is apparently not constant, but varies by skeletal location (Carter, 1984; Hsieh et al., 2001; Lanyon and Skerry, 2001; Lieberman et al., 2001; Currey, 2002) as well as by systemic factors such as age, disease state, hormonal status, and genetic background (Frost, 1987; Lee et al., 2003; Pearson and Lieberman, 2004; Suuriniemi et al., 2004). Also, the type of strain (its frequency and other characteristics), as well as the loading history of the bone cells, are important variables influencing the magnitude of bone response (Turner, 1998; Burr et al., 2002).

These complexities suggest that the general model shown in Figure 1 must be interpreted carefully and within specific contexts (e.g., comparisons between similar skeletal regions in genetically similar animals), and that disentangling the effects of different loading components, such as load magnitude vs. frequency, may be very difficult from morphology alone. But increased complexity does not invalidate application of the general model, which is supported by much experimental evidence, reviewed below, with suitable caution (Lanyon and Skerry, 2001). In any event, arguments regarding the validity of the “strict” version of Wolff’s law must be distinguished from those concerning the nature of bone functional adaptation in general.

EXPERIMENTAL EVIDENCE FOR BONE FUNCTIONAL ADAPTATION

A series of now-classic papers from the 1960s through the 1980s appeared to provide clear evidence for bone functional adaptation to mechanical loading and unloading, using various experimental animal models (e.g., Saville and Smith, 1966; Hert et al., 1969; Liskova and Hert, 1971; Chamay and Tschantz, 1972; Uhthoff and Jaworski, 1978; Goodship et al., 1979; Jaworski et al., 1980a; Woo et al., 1981; Churches and Howlett, 1982; Lanyon et al., 1982; Lanyon and Rubin, 1984), as well as observations of human athletes (e.g., Nilsson and Westlin, 1971; Jones et al., 1977) (for a comprehensive review, see Meade, 1989). However, in their critique, Bertram and Swartz (1991, p. 23) argued that much of this evidence was inherently flawed because of problems in experimental design: ‘‘While accepting that mechanical load has substantial influence on the development of form in bone, we argue that to date there is no direct evidence of its influence on the healthy mature appendicular skeleton that is not seriously compromised by complications arising from indirect effects of the investigative procedures on other aspects of the organism’s physiology.’’ The ‘‘complications’’ that Bertram and Swartz (1991) referred to involve inflammatory responses due to surgical treatment, and repair phenomena (regeneration of injured tissue, or repair of stress fractures), none of which they considered to properly fall under ‘‘Wolff’s law.’’ With regard to the first of these factors, it should first be noted that the studies above that included surgical intervention generally included surgical controls, i.e., bones in which all surgical procedures except the change in mechanical loading had been carried out, although as Bertram and Swartz (1991) and others pointed out, it is still difficult to completely control for all related effects.
There is also some question as to whether woven bone, a typical response (at least at first) to sudden mechanical overload, is ‘‘normal’’ or ‘‘pathological’’ (see also Frost, 1988). However, with regard to this latter issue, Burr et al. (1989, p. 232), in a carefully controlled experiment that intentionally incorporated some features of earlier experimental work, demonstrated that ‘‘woven bone can be a normal adaptive response to an intense mechanical challenge, even in the absence of trauma or fatigue-induced damage.’’

Partly in response to such criticisms, a series of investigators beginning in the early 1990s developed new animal models that did not involve invasive surgical procedures (Turner et al., 1991; Torrance et al., 1994; Forwood et al., 1998). These models have since been used extensively to study various aspects of bone modeling/remodeling under altered mechanical loadings (e.g., Hsieh et al., 2001; Burr et al., 2002; Robling et al., 2002).

Figure 2 shows the results of one such experiment (Robling et al., 2002). In this experiment, the forearms of 6-month-old rats were dynamically loaded in compression, which creates bending stresses in the midproximal region of the bone due to its natural curvature. At this age, the rats can be considered ‘‘adults,’’ since no further growth in bone length occurred over the 16-week experimental period. The extra loading produced an increase of 70–100% in bending rigidity (second moment of area) in the plane of bending compared to control limbs, through increased periosteal bone apposition in regions under the highest bone strain (Fig. 2). Bone strength determined through direct mechanical testing after sacrifice increased 64–165%, depending on the loading schedule and strength parameter. Interestingly, these gains were not well-represented by changes in bone mineral content (BMC) or bone mineral density (BMD), the two most commonly measured outcomes of human exercise studies. Conversely, changes in the relevant second moment of area (a geometric property) explained 92% of the variance in strength (ultimate force). Similar results were obtained by Warden et al. (2005), who also demonstrated greatly increased fatigue resistance after experimentally increased loading in the same animal model. These studies noninvasively reproduced and extended similar results of earlier studies (e.g., Lanyon et al., 1982) demonstrating the specificity of bone adaptation to changes in strain distributions, and also the primacy of geometric changes in such adaptations (Woo et al., 1981).
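The leverage that geometry gives over rigidity can be illustrated with the textbook second moment of area of a hollow elliptical section, a common idealization of a long bone diaphysis. The dimensions below are hypothetical (not taken from Robling et al., 2002); the sketch only shows why periosteal apposition in the plane of bending raises bending rigidity much faster than it raises cortical area, the quantity that BMC effectively tracks:

```python
import math

def ellipse_section(a_peri, b_peri, a_endo, b_endo):
    """Cortical area and second moment of area for an idealized hollow
    elliptical diaphysis.  a_* are semi-axes in the plane of bending,
    b_* the perpendicular semi-axes; *_peri = periosteal envelope,
    *_endo = endosteal (medullary) envelope."""
    area = math.pi * (a_peri * b_peri - a_endo * b_endo)
    # Solid-ellipse formula I = (pi/4) * a^3 * b, minus the medullary cavity.
    i = math.pi / 4 * (a_peri**3 * b_peri - a_endo**3 * b_endo)
    return area, i

# Hypothetical rodent-forelimb-scale dimensions (mm), before and after
# 0.2 mm of periosteal apposition confined to the plane of bending.
area0, i0 = ellipse_section(1.5, 1.0, 0.8, 0.5)
area1, i1 = ellipse_section(1.7, 1.0, 0.8, 0.5)
print(f"area gain: {100 * (area1 / area0 - 1):.0f}%, "
      f"rigidity gain: {100 * (i1 / i0 - 1):.0f}%")
# area gain: 18%, rigidity gain: 49%
```

Because the periosteal semi-axis enters the rigidity formula cubed, strategically placed apposition yields large gains in bending rigidity at modest cost in added bone mass, consistent with the weak BMC/BMD response noted above.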

Bertram and Swartz (1991) also argued that much of the change in bone dimensions observed in human athletes, and commonly attributed to increased mechanical loading, was actually a repair process in response to ‘‘chronic fatigue damage,’’ and as such did not qualify as support for ‘‘Wolff’s law.’’ It could be debated whether repair of fatigue-induced microcracks is actually outside the realm of ‘‘normal’’ bone mechanical adaptation, since such repair has been hypothesized to be an important component of bone remodeling throughout life (Martin et al., 1998). Other more recent studies of human athletes and volunteers in exercise intervention studies also showed clear evidence of adaptive bone modeling/remodeling without evidence of fatigue (or stress) fractures, as reviewed below. The varying response of bone to applied loading is probably best viewed as a continuum, involving in some cases rapid deposition of woven bone (which can subsequently be remodeled into lamellar bone), in some cases repair of microcracks, and in other cases direct deposition of lamellar bone, depending on the severity and suddenness with which the loading schedule is implemented (Rubin et al., 1995). Also, not all bone features may react similarly to applied loading: in Robling et al. (2002), the distal ulnar articulations of experimentally loaded groups developed osteophytic reactions, which is perhaps not surprising given the very ‘‘abnormal’’ way in which the carpus was loaded (Fig. 2), and possible constraints on articular remodeling (Ruff et al., 1991; Lieberman et al., 2001) (what this might indicate regarding mechanisms underlying osteoarthritis was not addressed by the authors). However, the response in the diaphysis was ‘‘normal’’ in terms of bone tissue appearance (Fig. 4 in Turner and Robling, 2004).
In summary, while experimental and observational studies have their limitations, such studies have clearly demonstrated that functionally adaptive changes in bone structure can be brought about by manipulation of mechanical loadings, supporting the general model shown in Figure 1.

IN VIVO STRAINS AND FUNCTIONAL ADAPTATION

Given that bone adaptation to mechanical loadings very likely involves a response to strains (deformations) engendered by such loadings, direct measurement of bone strains in vivo using strain gauges can provide important information in evaluating adaptive mechanisms (e.g., Fig. 2B, although in this case, strains were calculated in a simulated in vivo loading) (Robling et al., 2002). Three recent studies documented in vivo strains in the long bones of macaques (Demes et al., 1998, 2001) and sheep (Lieberman et al., 2004), and concluded that strain patterns were not well-correlated with cross-sectional geometry of the bones, thereby casting doubt on whether cross-sectional geometry could be used to reconstruct mechanical loading history. Specifically, the bending axes generally did not match well with the neutral axes of sections, or conversely, sections were not reinforced in regions of maximum strain (Lieberman et al. (2004) made a number of other points, which are addressed below). It should be noted that none of these studies examined the effects of exercise per se on bone modeling/remodeling, but rather the normal patterns of strain during locomotion in laboratory animals.

The fact that long bone diaphyses may be customarily bent in planes that are not equivalent to their directions of greatest bending rigidity or strength was noted previously (Lanyon and Rubin, 1985). Together with the observation that long bone curvature often seems to increase rather than decrease strains in vivo, this formed the basis for theories that bone structure may be designed in some cases to confine strains to more predictable patterns, rather than strictly to minimize strains (Lanyon and Rubin, 1985; Bertram and Biewener, 1988). This is not inconsistent with the model shown in Figure 1: some degree of bending could actually be beneficial to bone tissue by maintaining strains within the ‘‘optimum customary’’ window (Lanyon and Rubin, 1985). At the same time, potentially catastrophic strains in ‘‘unusual’’ orientations could be avoided. Because of their more readily available surfaces for attaching strain gauges, the distal limb elements of cursorial animals (horses, sheep, and dogs) were most often used in these experiments (see also Lieberman et al., 2004). These skeletal locations are relatively ‘‘unprotected’’ medially and laterally by muscle tendons (one reason that they are more accessible for strain gauges) (e.g., see Piotrowski et al., 1983; Thomason, 1985). Thus, any unusual bending in the mediolateral plane (e.g., due to turning or walking over uneven ground) is probably less able to be modified by muscles, making this a more ‘‘dangerous’’ loading orientation for the bones. In these situations, it is not unreasonable to postulate a genetically selected difference in strain sensitivity thresholds that would favor the development of an elliptical cross section oriented to increase mediolateral (M-L) bending strength (Lanyon et al., 1982; Piotrowski et al., 1983; Nunamaker et al., 1989; Lieberman et al., 2004).
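The structural trade-off of an elliptical cross section can be sketched with the solid-ellipse second moment of area formulas (I = πa³b/4 for bending in the plane of the a semi-axis). The semi-axis values below are arbitrary, chosen so that the circular and elliptical sections enclose equal area, i.e., equal bone mass:

```python
import math

def ellipse_rigidities(a_ml, b_ap):
    """Second moments of area of a solid elliptical section with
    semi-axes a_ml (mediolateral) and b_ap (anteroposterior):
    i_ml resists bending in the M-L plane, i_ap in the A-P plane."""
    i_ml = math.pi / 4 * a_ml**3 * b_ap
    i_ap = math.pi / 4 * a_ml * b_ap**3
    return i_ml, i_ap

# A circular section vs. an M-L elongated ellipse of equal area
# (the product a * b is identical, so bone mass is identical).
i_ml_c, i_ap_c = ellipse_rigidities(1.0, 1.0)
i_ml_e, i_ap_e = ellipse_rigidities(1.25, 0.8)
print(i_ml_e / i_ml_c)   # ~1.56: M-L bending rigidity up ~56%
print(i_ap_e / i_ap_c)   # ~0.64: A-P bending rigidity down 36%
```

Elongating the section mediolaterally thus buys substantial protection against the ‘‘dangerous’’ M-L loading direction without adding tissue, at the cost of rigidity in the habitually muscle-protected A-P direction.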

Of course, postulating genetic mechanisms that alter the ‘‘optimum customary’’ strain sensitivity of bone tissue argues against making comparisons between species that are not closely related, and for whom genetic selection histories may have been significantly different: one would not want to use differences in cross-sectional shape between a human and horse long bone to reconstruct behavioral differences between them! In this respect, we fully concur with the caution by Demes et al. (2001, p. 264) ‘‘against broad behavioral conclusions derived from long bone cross-sectional shape.’’ However, comparisons within species or between closely related species who share the same basic body design and evolutionary history are much less likely to be confounded by such factors (see also Lieberman et al., 2004). In this regard, it is interesting that even in highly cursorial animals, activity patterns appear to affect long bone cross-sectional geometry in predictable ways. Thoroughbred and standard-bred horses differ in cross-sectional geometry of the third metacarpal (cannon bone), such that thoroughbreds, who are subjected to more rigorous training of a type that specifically engenders high strains in the anteroposterior (A-P) plane, have more A-P strengthened bones (Nunamaker et al., 1990). McCarthy and Jeffcoat (1992, p. 35), in an experimental study of young (yearling) thoroughbreds, also documented a site-specific effect of exercise in these animals: ‘‘In the unexercised group periosteal bone apposition occurred uniformly around the third metacarpal without selective enlargement of any cortex.
The increased thickness of the dorsal cortex in the exercised horses means that the bone is better able to withstand loading of this cortex where very high compressive strains can occur during locomotion.’’ These results are consistent with the view that there is a basic structural model, in part genetically determined, of the horse third metacarpal that can then be modified by specific environmental (mechanical) stimuli (for a very similar argument, see Turner, 1998). This is very much analogous to comparisons of the same skeletal element within or between human populations with different behavioral characteristics (e.g., Ruff, 1987; Stock and Pfeiffer, 2001): because the basic underlying model is similar, variations in morphology are more likely to reflect variations in applied loading throughout life.

The above reasoning also argues for caution in extrapolating results of strain gauge experiments between skeletal locations or species with very different body plans and evolutionary histories. There is evidence that strain distributions in bones/species that are less specialized for cursorial locomotion more closely match traditional expectations of greater bone strength in directions of higher strain, especially during vigorous movement. Figure 3 shows some of the results of Demes et al. (2001) on strains in the macaque tibial mid-diaphysis during walking and galloping, and of Szivek et al. (1992) on strains in the greyhound femoral mid-diaphysis at various speeds (although the greyhound is certainly well-adapted for cursorial locomotion, its femur is surrounded by muscles in much the same way as a noncursorial animal). In both cases, anterior and posterior strains increased with increasing speed. In the macaque tibia, the bending axis (the axis around which the bone is bent) during galloping moved to within 19° of the M-L axis, and to within about 13° of the neutral axis of the section (the axis about which bending rigidity is greatest) (Fig. 3A). That is, the greatest strains during galloping were experienced in almost the same direction as that of maximum bending rigidity. In the greyhound femur, the bending axis similarly rotated to a more M-L orientation (22° from the M-L axis) as speed increased from 0.61 to 2.44 m/sec, the former a slow walk and the latter a trot (Rubin and Lanyon, 1982) (Fig. 3B). While the neutral axes of sections were not calculated in this study, Szivek et al. (1992, p. 105–106) noted that during running, ‘‘the peak strain regions shifted to the anterior and posterior aspects of the bone… The shape of the cross section of the greyhound femur at the mid-diaphysis (i.e., oblong) may be a result of this strain distribution while the dog performs.’’ Carter et al. (1981; see their Fig. 6) obtained very similar results for a mixed-breed dog moving at a speed between the two higher speeds shown in Figure 3B.
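The geometric relation between an applied bending moment and the resulting zero-strain (bending) axis can be sketched with the classic unsymmetric-bending formula tan β = (Ix/Iy)·tan α, where α is the angle of the moment vector and β the angle of the zero-strain axis, both measured from the x principal axis. The rigidity values below are hypothetical and are not drawn from the macaque or greyhound data:

```python
import math

def bending_axis_angle(alpha_deg, i_x, i_y):
    """Angle (degrees from the x principal axis) of the zero-strain
    axis when a pure bending moment acts at alpha_deg from the
    x principal axis; i_x, i_y are the principal second moments."""
    tan_beta = (i_x / i_y) * math.tan(math.radians(alpha_deg))
    return math.degrees(math.atan(tan_beta))

# Circular section (i_x == i_y): moment axis and zero-strain axis coincide.
print(bending_axis_angle(20.0, 1.0, 1.0))
# Oblong section twice as rigid about x: the zero-strain axis is rotated
# away from the moment axis, toward the more compliant principal direction.
print(round(bending_axis_angle(20.0, 2.0, 1.0), 1))
```

Only in a circular section do the two axes align for every loading direction; in an oblong section they coincide only when the habitual moment lies along a principal axis, which is one reason measured bending axes and section neutral axes need not match exactly.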

It should also be noted that in both of the studies depicted in Figure 3, the magnitude of maximum strain increased substantially in moving from a walk to a trot or gallop, as would be expected (Rubin and Lanyon, 1982). Because the stimulus for bone functional adaptation is dependent on strain rate, which in turn is dependent on strain magnitude and frequency (Turner, 1998), it is likely that more dynamic activities are far more osteogenic than slow walking (although small strains may also be osteogenic; see Fritton et al., 2000). As observed by Rubin and Lanyon (1982, p. 206) in their now-classic review of in vivo strain gauge results, ‘‘The association which naturally exists between high peak strains and high strain rates will therefore result in bone architecture being preferentially influenced by the strains encountered during periods of vigorous, rather than more sedentary, activity’’ (see also Mikic and Carter, 1995). This is closely related to the ‘‘cellular accommodation’’ theory of Turner (1999), whereby bone cells are only stimulated by more ‘‘unusual’’ loadings. It can be presumed that galloping or trotting in the macaques and dogs included in Demes et al. (2001) and Szivek et al. (1992) was a relatively unusual, although certainly not unknown, activity compared to walking. The fact that the cross-sectional shape of both bones better corresponded with strains engendered during running may be a product of the higher strains produced by this more unusual, but still ‘‘characteristic’’ loading. This suggests that bone structure is correlated with activity, and primarily vigorous activity.

In the other study by Demes et al. (1998; see also Demes et al., 2001), maximum strains in the macaque ulnar midshaft were always located closer to the medial and lateral cortices, regardless of speed of locomotion (although the location of peak strain moved slightly toward the anterior and posterior cortices during galloping), while the bones were slightly stronger in the A-P direction. This would seem contrary to the scenario presented above. However, unlike the tibia, the macaque ulna is part of a more ‘‘multifunctional’’ (Schaffler et al., 1985) forelimb complex that serves in a greater variety of roles, both locomotor and nonlocomotor, than do the hindlimb bones. Even during locomotion, the macaque forelimb experiences significant changes in applied loadings, depending on substrate (Schmitt, 2003). Significant load-sharing with the radius, which is actually stronger than the ulna in cercopithecoids (Ruff, 2002), further complicates interpretations. In many ways, then, the loading environment of the macaque tibia is probably simpler and more predictable than that of the ulna, with more stereotypical positioning of the limb, muscle recruitment, and resultant strain patterns (e.g., peak strains in the macaque ulna actually declined from walking to galloping (Demes et al., 1998), which has not been reported in studies of other bones/species). In terms of behavioral reconstructions, interpretations of forelimb bone cross-sectional shape will be similarly complex, although overall forelimb relative to hindlimb strength proportions are still informative regarding general locomotor behavior (Stock and Pfeiffer, 2001; Ruff, 2002).

We should also remember that, in adults at least, strain gauges measure deformations in bones that have already adapted to mechanical loading. As noted above, if the most osteogenic strains are those that occur under vigorous loadings such as running, the bone will adapt by altering its geometry accordingly, following the general model in Figure 1. The strains developed during less vigorous (but more common) loadings such as walking would thus be, in effect, ‘‘residual’’ strains that are insufficient to stimulate modeling/remodeling (Turner, 1999). This could lead to misinterpretations of strain gauge data in terms of in vivo loadings. For example, if large A-P bending loads of certain limb bones occur during running that create large strains on the anterior and posterior surfaces, which in turn stimulate bone deposition on those surfaces, then during walking (where A-P bending loads are probably much smaller), anterior and posterior surface strains will be small, and medial and lateral surface strains relatively larger. This does not, however, indicate that bending loads (even in walking) are typically larger in the mediolateral direction. Thus, one must be careful in extrapolating from strains to loads.

The strain gauge study in sheep by Lieberman et al. (2004) addressed two other issues relevant to interpretations of long bone cross-sectional geometry: does the axis of bending of a long bone pass through the section centroid, and does this axis remain in a similar position throughout locomotion (stance)? Both questions were answered in the negative. The first result is similar to that obtained by other researchers or implied by their results (e.g., Carter et al., 1981; Rubin and Lanyon, 1982; Szivek et al., 1992). Because of the superimposition of axial compressive on bending loads in most long bones, overall compressive strains are higher than tensile strains; the axis of bending (0 strain) correspondingly shifts toward the tensile side, thereby no longer passing through the section centroid (Fig. 3B; see also Fig. 2 in Lieberman et al., 2004). This is significant because geometric section properties that reflect bending rigidity and strength (second moments of area and section moduli) are typically calculated around axes that pass through the section centroid (e.g., Ruff and Hayes, 1983; Sumner et al., 1985). Thus, rigidity or strength estimates based on such properties will be in error, by as much as 30–50% (Lieberman et al., 2004). It should be noted, first, that these results do not affect past interpretations of the pure bending rigidity/strength of long bones; second moments of area and section moduli, as traditionally calculated, are still valid representations of such properties. What is strictly invalid is the implied assumption that in vivo loadings are, in fact, pure bending loads.
In studies that can be used to directly assess this assumption in vivo, the degree of deviation of bending axes from section centroids can be quite variable, even at the same skeletal location and within the same species: between different phases of the stance cycle, different animals, right and left limbs of the same animal, and even in repeated trials of the same limb of the same animal (e.g., Figs. 2 and 5 in Szivek et al., 1992). The bending axis may shift from one side of the section centroid to the other, depending on these factors (Szivek et al., 1992; Demes et al., 2001). In other words, there is no consistent ‘‘correction’’ factor that can be incorporated into section property analyses to account for these variable deviations.
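The offset of the zero-strain axis from the centroid under superimposed axial compression, and the size of the resulting error in a geometric property, follow from standard beam superposition and the parallel-axis theorem. All values below are hypothetical, chosen only to illustrate magnitudes; they are not measurements from any cited study:

```python
# Hypothetical long bone section and loading (units: mm, N).
cort_area = 50.0     # cortical area, mm^2
i_centroid = 400.0   # centroidal second moment of area, mm^4
moment = 2000.0      # bending moment, N*mm
axial_f = 300.0      # superimposed axial compressive force, N

# strain(y) is proportional to -F/A + M*y/I, so the zero-strain
# (bending) axis sits at y0, offset toward the tensile side:
y0 = axial_f * i_centroid / (cort_area * moment)
print(f"bending-axis offset from centroid: {y0:.2f} mm")

# Parallel-axis theorem: second moment about the shifted axis.
i_shifted = i_centroid + cort_area * y0**2
print(f"difference if properties were taken about the shifted axis: "
      f"{100 * (i_shifted / i_centroid - 1):.0f}%")
```

Because the offset itself varies from trial to trial, and can even change sides of the centroid, the error term (area × offset²) cannot be removed with a single correction factor, which supports the convention of reporting properties about centroidal axes.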

This is closely related to the second of the results of Lieberman et al. (2004): because of changes in ground reaction forces, limb positioning, and muscle forces during locomotion, bending axes cannot remain in exactly the same position relative to the cross section. This is clearly implied by earlier studies, such as Carter (1978), based on work originally reported by Lanyon et al. (1975), of strains in the human tibia during walking and jogging, in which longitudinal strains on one surface of the cortex shifted between tensile and compressive within one gait cycle. Because in this study, strain data were collected on only one surface (anteromedial), strain distributions cannot be determined; however, these results necessitate a major shift in the bending axis across the tibial cortex during gait. This is perhaps not surprising, given the variable position of the human tibia relative to the body’s center of gravity and changing muscle actions during gait (Inman et al., 1981).

Thus, even with in vivo strain gauge data in hand, it is not possible to precisely define the position of the bending axis of a long bone section, because it varies constantly during use of the limb, both between and within individuals. Therefore, it is not possible to factor this into bone structural analyses, except perhaps in a general sense (Griffin and Richmond, 2005). In fact, it might be counterproductive to attempt to do so, at least quantitatively, because the particular choice of bending axis could bias results in unpredictable ways. Thus, it is probably advisable to continue to report section properties (second moments of area and section moduli) relative to centroidal axes, with the understanding that these are only approximations of true bending rigidity and strength in vivo. In this respect, it is reassuring that Lieberman et al. (2004) obtained correlations of about 0.9 or better between section properties measured to centroidal axes and those measured to an average bending axis determined experimentally. Also, since the main interest of many anthropological and paleontological studies is the relative importance of different types of mechanical loadings, deviation of absolute estimated rigidities or strengths from actual values (even if such values were constant and could be determined) is of less concern, provided that the basic mechanical model is similar between the individuals being compared (see above).

Finally, as recognized by the investigators themselves, strains measured in laboratory animals moving on a treadmill at constant (and usually fairly low) speeds and in a straight line are not representative of the full range of variability present during normal activities (Lieberman et al., 2004), and only occasionally measure strains at the more important (see above) higher gait velocities. As noted by Dickinson et al. (2000, p. 105) in a wide-ranging review of animal locomotion, ‘‘In nature, unlike in the laboratory, straight-line, steady-speed locomotion is the exception rather than the rule.’’ Variable directionality of movement may explain, for example, why bones subjected primarily to A-P bending in typical treadmill exercises are still reinforced mediolaterally (Fig. 3, and see above). Mikic and Carter (1995, p. 465) were more explicit:

‘‘One difficulty that is encountered when using bone strain data in studies of functional adaptation is that reported data are often far from a complete record of strain over an experimental period. On the contrary, the reported results generally consist of a few average cyclic strain parameters that are extracted from a short period of recordings while an animal performs a very restricted task. Most investigators agree, however, that a much more complete record of strain history is required to relate bone biology and morphology to strain. Such records should include the many diverse activities of the animal, including cage activity.’’

This is admittedly a very difficult task, and may not be totally achievable, even for animals in a controlled laboratory environment (but see Fritton et al., 2000, for example). Thus, researchers have turned to theoretical modeling approaches (extrapolated from available in vivo data) in an attempt to determine the influence of overall loading history on bone morphology (e.g., Carter, 1987; Beaupre et al., 1990; van der Meulen et al., 1993; Mikic and Carter, 1995). However, these observations emphasize some of the problems inherent in using data of this kind.

This is not to say that in vivo strain gauge studies cannot provide very valuable information: strain studies in animals have been critical in investigating general bone adaptive mechanisms (e.g., Fig. 2), and the few in vivo studies of strain in human (Lanyon et al., 1975; Burr et al., 1996; Aamodt et al., 1997; Carter, 1978) and nonhuman primate (Swartz et al., 1989; Demes et al., 1998, 2001) long bones helped clarify mechanical loadings of these skeletal elements. Any morphological studies should carefully consider such evidence and its implications for reconstructing behavior. However, we also need to carefully consider the limitations of such data when applied to ‘‘real life’’ situations, i.e., the total loading history of a bone.

GENETIC DETERMINATION OF BONE MORPHOLOGY

The pace of discovery of new genetic mechanisms underlying bone growth and development has increased dramatically over the past several decades (for recent reviews in the anthropological literature, see Chiu and Hamrick, 2002; Lovejoy et al., 2003; Pearson and Lieberman, 2004). Building on these discoveries, Lovejoy et al. (2002, 2003) argued forcefully for the importance of genetic mechanisms in the determination of bone morphology, and conversely, the relative insignificance of ‘‘mechanoanabolism,’’ or the functional adaptation of bone to perceived mechanical stimuli during life. These arguments tend to dichotomize genetic and environmental effects: ‘‘The most relevant issue for anthropologists is the degree to which adult bone structure is indicative of genetic background versus its history of load transduction’’ (Lovejoy et al., 2003, p. 101), with genetic influences argued to be paramount: ‘‘External bone morphology now appears to be largely dictated by an integrated system of sequentially expressed gene arrays’’ (Lovejoy et al., 2002, p. 99). While we fully agree that a better understanding of bone developmental genetics is important for explaining the evolution of skeletal morphological variation (e.g., Shubin et al., 1997; Hallgrimsson et al., 2002; Hamrick, 2003), we believe that this rather polarized view is counterproductive: because genetic mechanisms are important does not mean that direct environmental stimuli are not; in fact, in certain respects, the two may be inseparable (Martin et al., 1998, p. 270–271; also see below). As shown above, it is obvious that mechanical loading during life can have a strong effect on variation in bone morphology. Minimizing the importance of mechanical effects artificially restricts the scope of inquiry, and hinders attempts to provide a complete explanation for this variation.

Another factor that must be carefully considered in this context is variability between different types of skeletal features in the extent to which they are environmentally modifiable during life. For example, long bone articular size appears to be less affected by changes in mechanical loading than cross-sectional diaphyseal size (Ruff et al., 1991; Lieberman et al., 2001). The great majority of developmental genetic studies of the skeleton examined variation in gross morphological features (e.g., patterns of limb element organization); bone ‘‘size’’ features such as mass, volume, and length; or bone ‘‘density’’ (usually not true tissue density). Heritability estimates for bone mineral content (BMC) and bone mineral density (BMD), the most commonly measured bone parameters, average about 60–70% in humans, but if covariation with body mass is accounted for, this falls to about 50% (for an excellent review, see Prentice, 2001). However, such skeletal traits do not provide estimates of mechanically relevant parameters (Sievanen et al., 1996; van der Meulen et al., 2001). Volkman et al. (2003, 2004) carried out a more relevant study in which they assessed genetic effects on cross-sectional geometric and other mechanical properties of the mouse femur, using quantitative trait loci (QTL) analysis. They found evidence for complex genetic control of these characteristics, but at a low level: genetic markers accounted for only 3–22% of trait variances.
They discussed several possible pathways through which genes may influence bone structure: 1) a direct influence on bone size and shape (i.e., directed activity of osteoblasts and osteoclasts); 2) an indirect effect on factors such as body weight, muscle strength, and activity level, which in turn alter mechanical load and thus bone structure; and 3) an effect on responsiveness of bone to applied mechanical loading (i.e., ‘‘set points’’ in a ‘‘mechanostat’’-like mechanism; see Frost, 1987; Martin et al., 1998, p. 270–271). The interaction between genetic and environmental effects is prominent in the second two of these proposed mechanisms. Other investigators, in fact, demonstrated varying mechanosensitivity in different mouse strains (Kodama et al., 2000; Robling and Turner, 2002).

Many other recent studies found evidence for some heritability of various bone structural traits, sometimes sex-linked (e.g., Peacock et al., 2005, and references therein). It is important to note that these studies do not provide estimates of actual genetic determination of traits, however (Prentice, 2001), and that as noted above, final adult morphology is likely to be a complex product of genetic-environmental interactions. A good example of this is the interaction between the gene(s) encoding for the estrogen receptor α (ER-α) and physical exercise: both the receptor and increased mechanical loading are necessary to increase bone mass (Lee et al., 2003; Suuriniemi et al., 2004). As Lanyon and Skerry (2001, p. 1938) pointed out, ‘‘although systemic influences may modify mechanically adaptive processes, they cannot substitute for them,’’ i.e., regardless of genetic background, appropriate mechanical loading is necessary to develop normal adult form. This was demonstrated in experiments early in the last century in which bones isolated from mechanical loading during growth still developed the general features of their normal counterparts, but not the specific morphological details (Murray, 1936). The major evolutionary features of skeletal morphology (e.g., what makes a horse skeleton different from a human skeleton) may be principally genetic, but what makes one horse (or one human) skeleton different from another is likely to be a product of both genetics and environment, with different skeletal features more or less environmentally modifiable. Thus, understanding both genetic and environmental influences is critical to understanding morphological variation.

AGE-DEPENDENCE OF BONE FUNCTIONAL ADAPTATION

Another issue discussed by Bertram and Swartz (1991) was the apparent age-specificity of bone response to changes in mechanical loading, particularly a reduction in mechanical loading (e.g., Jaworski et al., 1980b), which might argue against a universally applicable "Wolff's law." Bertram and Swartz (1991, p. 267) also noted a distinction between modification of growth patterns and functional adaptation in the "mature" skeleton: "It appears to us that many of the adjustments of bone form associated with mechanical loads result from the interaction of load with the developmental/growth process, which in bone normally persists into young adulthood (mid-20's or later in human studies)." It is important to recognize that the "growth" period, as they defined it here, includes early adulthood, extending beyond adolescence as usually defined. This would correspond to the "positive" period of skeletal growth where bone mass is normally increasing, even though growth in bone length has largely ceased (Riggs and Melton, 1992).

Forwood and Burr (1993) reviewed earlier animal and human exercise studies across different age groups. They concluded that while the main function of mechanical loading in the adult skeleton was to conserve or maintain existing bone, "exercise can, in fact, add small amounts to bone mass of the adult skeleton," on the order of a few percent, although this adaptation "is modest when compared with that in growing bone" (Forwood and Burr, 1993, p. 100). They also noted that very intensive exercise can stunt growth in growing bones. Pearson and Lieberman (2004) reviewed the evidence for a reduction in osteogenic potential on a cellular level with aging. It should be noted, however, that the most drastic reductions occur in aged or senescent adult cells, so this factor may not be as applicable to comparisons between juveniles and young adults.

With respect to anthropological reconstructions of past behavior, the key question raised by these studies is: to what extent is the morphology of adult bones indicative of the mechanical loading of these bones during adulthood? This question can be subdivided into two related questions: first, can mechanical loading significantly change bone morphology after the childhood and adolescent years, and second, regardless of the answer to the first question, is adult bone morphology still informative with regard to adult loadings?

The first question can be answered in the affirmative, although it is also apparent that the bone response to mechanical stimuli is more marked in juveniles than in adults (Forwood and Burr, 1993; Turner et al., 2003). Part of the problem with evaluating bone sensitivity to mechanical loading in adults is that the response is probably slower than in juveniles; thus, long-term longitudinal studies may be necessary to clearly document effects. Many prospective exercise studies of human adults also have problems in study design, including nonrandomization of subjects, poor compliance, small samples, failure to control for other confounding effects, and failure to measure effects at the actual site of skeletal loading (Kerr et al., 1996), all of which may contribute to inconsistency of results across studies (Pearson and Lieberman, 2004). One of the best-controlled studies in older adults was by Kerr et al. (1996), who examined the effects of weight-training on BMD at several skeletal sites in postmenopausal women (mean age, 58 years at start of program) over a period of 1 year. The weight-training, three times per week, was carried out on either the right or left upper and lower limbs, with the opposite limb serving as an internal control, thus inherently accounting for systemic variables such as genetic influences, diet, hormonal status, and body weight. Their results for the distal radius are shown in Figure 4. As with many such studies in adults, the effects were relatively small (an average gain of 2.4% on the exercised side, compared to a loss of 1.4% on the control side), but significant. Statistically significant positive effects of exercise were also seen in the hip region. Interestingly, exercises aimed at increasing strength (higher weights, lower number of repetitions) had a more positive effect than those aimed at increasing endurance (lower weights, more repetitions). This suggested that the magnitude of loading (or rate of change in magnitude of loading) was more important than the number of loading cycles, which agrees with some animal experimental results (see references above).
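The intuition that loading magnitude can outweigh cycle number follows from saturating-stimulus formulations proposed in the bone-adaptation literature, in which the osteogenic stimulus scales roughly linearly with load magnitude but only logarithmically with the number of cycles. The following sketch is illustrative only; the function name and all numerical values are hypothetical and not taken from Kerr et al. (1996):

```python
import math

def osteogenic_stimulus(load_magnitude, cycles):
    """Toy saturating-stimulus model: linear in load magnitude,
    logarithmic (diminishing returns) in cycle number."""
    return load_magnitude * math.log(cycles + 1)

# Strength-style protocol: high load, few repetitions (hypothetical units)
strength = osteogenic_stimulus(load_magnitude=100, cycles=10)

# Endurance-style protocol: half the load, ten times the repetitions
endurance = osteogenic_stimulus(load_magnitude=50, cycles=100)

# Despite far fewer loading cycles, the higher-load protocol
# produces the larger stimulus under this model.
print(strength > endurance)
```

Under such a model, doubling the load roughly doubles the stimulus, while tenfold more cycles adds comparatively little, which is consistent with the strength-versus-endurance contrast reported above.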

Another observation apparent in Figure 4 is that a shorter-term study of a few months would not have picked up the significant effect of the exercise treatment. This factor should be kept in mind when evaluating results of shorter-term exercise studies in humans, or in other large, relatively long-lived mammals. Because of variable exigencies such as subject compliance and retention (in human studies) and caretaking expenses (in animal studies), these investigations are commonly carried out over short periods relative to total lifespan. This also has more general implications regarding interpretations of mechanical loading effects in adults: response to loading may be slower than in juveniles, but the total adult period available for functional adaptation is longer; thus, cumulative effects (positive or negative) may be larger than would be predicted by short-term studies.

This point was emphasized in a recent long-term longitudinal study of young adult female soccer players (Valdimarsson et al., 2005). The subjects averaged 18 years of age at the beginning of the study and were followed for 8 years, a long time period compared to most prospective studies. Bone mineral content and density of the whole body and BMD at various anatomical locations were evaluated using dual X-ray absorptiometry (DXA) at baseline and at follow-up. A control group of nonathletes of similar age was also followed over the same time period. In players who remained active throughout the period, BMC and BMD increased, and differences from controls increased (from 4% greater at baseline to 9–12% greater at the end in the total body, and from 7% at baseline to 14% greater in the leg; our calculations). Players who had retired from play during the follow-up period began with higher values than controls, but then showed no further increase over controls. Valdimarsson et al. (2005, p. 910) noted that while "exercise in adulthood has been described as at best conferring BMD benefits of a few percentage points . . . this notion is only supported by short-term prospective controlled studies spanning, at best, 24 months." The age period evaluated in this study can appropriately be termed "postadolescent" (there was no change in height among the active players), but it does fall within the early adult "developmental/growth" period as defined by Bertram and Swartz (1991) (see above).

The bilateral asymmetry model used by Kerr et al. (1996) was also exploited in a series of studies of the playing and nonplaying upper limbs of athletes in racket sports. In one such study (Kannus et al., 1995), the age-dependence of exercise effects was specifically addressed by comparing bilateral asymmetry in upper limb BMC of adult "national-level" female tennis and squash players who had started playing at ages varying from young childhood to young adulthood. All individuals had played for at least 5 years, and the number of years of training was not a significant covariate. A progressively greater effect on bilateral asymmetry was seen in individuals who had started playing earlier: in the humeral shaft, the earlier starters averaged 20–24% asymmetry, while the older starters averaged 8–10% asymmetry. This result has been repeatedly cited as providing evidence for the age-specificity of mechanical loading on bone (Fig. 6 in Turner and Robling, 2003; Fig. 12 in Pearson and Lieberman, 2004; but note misattribution of data to another study), and it does appear to demonstrate a declining response after early adolescence. However, it is also noteworthy that even the oldest starters in this study, averaging 34 years of age when they began playing, still had about three times greater bilateral asymmetry than the controls included in the study. Thus, even in this age range, increased mechanical loading appears to significantly increase bone mass. Because the study was cross-sectional in design, it is not possible to say for sure how much bone was actually gained during the playing period. However, from an anthropological perspective, the greater asymmetry in the older-starting players would be correctly interpreted as reflecting greater asymmetric use of the upper limbs during adulthood.
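Asymmetry percentages of the kind quoted above are conventionally computed as the side-to-side difference expressed relative to the nonplaying (control) limb. A minimal sketch, with hypothetical humeral-shaft BMC values chosen only to land in the ranges reported by Kannus et al. (1995):

```python
def bilateral_asymmetry(dominant, nondominant):
    """Percent side-to-side difference, expressed relative to the
    nonplaying (control) limb."""
    return (dominant - nondominant) / nondominant * 100.0

# Hypothetical BMC values in grams; illustrative only.
# Early starter: large playing-arm excess.
print(round(bilateral_asymmetry(36.0, 30.0), 1))  # 20.0

# Late starter: smaller, but still well above the ~3% typical of controls.
print(round(bilateral_asymmetry(32.4, 30.0), 1))  # 8.0
```

Because both limbs belong to the same individual, this within-subject contrast controls for genetics, diet, and hormonal status in the same way as the Kerr et al. (1996) design.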

As noted earlier, BMC and BMD are not true mechanical characteristics, and their interpretation can be confounded by nonconcordant and nonlinear changes in subperiosteal and endosteal breadths during growth and young adulthood (Frisancho et al., 1970; Ruff et al., 1994; Petit et al., 2004). Significant changes in bone strength can occur with relatively small changes in BMC or BMD (Robling et al., 2002; Warden et al., 2005). A number of studies incorporated more mechanically relevant properties into evaluations of exercise effects in human subjects, using the upper limb bilateral asymmetry model. Ruff et al. (1994) and Trinkaus et al. (1994) reanalyzed radiographic data originally collected by Jones et al. (1977) for male and female professional tennis players (mean age, 25 years; mean starting age, 10 years). Median asymmetry in the polar second moment of area, J (a measure of bending/torsional rigidity), of the mid-distal humeral shaft was 57%, compared to median asymmetries near 10% in recent "normal" populations. We also found an age effect, with players who started playing earlier showing more asymmetry due to greater subperiosteal expansion. However, the individual in the sample who started playing the latest (a male who started playing at age 19 years) still showed asymmetry in J of 31%. Haapasalo et al. (2000), using peripheral quantitative computed tomography (pQCT), found mean bilateral asymmetries of 39–46% in second moments of area of the humeral midshaft in male Finnish "national top-level" tennis players (mean age, 30 years; mean starting age, 10 years). No evidence for any exercise effect on cortical bone density was found, i.e., mechanical adaptation to increased loading appeared to be all geometric (paralleling animal exercise studies, e.g., Woo et al., 1981). Kontulainen et al. (2002), also using pQCT, measured bilateral asymmetry in geometry and bone density of the upper limb bones in a subset of the same sample studied by Kannus et al. (1995). Mean asymmetry in a strength index that combined J and bone density of the humeral midshaft was about 26% in