A new study of identical twins confirms that genetics are a poor predictor of disease. It's yet another indicator of how the public needs to understand genes as knobs and switches rather than blueprints. Here's my piece at The Atlantic.
I was honored to be part of a discussion panel at The Franklin Institute this past weekend to kick off this year's EduCon conference. The conference is an offshoot of the Science Leadership Academy, an amazing new Philadelphia public high school, and its visionary founder Chris Lehmann. The open-ended question posed to the panel was: "What is Smart?" Here are my slightly-edited opening remarks:
What is smart? This is a really exciting time to ask that question. For a century, we've been living under the oppressive yoke of innate-IQism, the idea championed by Francis Galton, Charles Spearman, and Lewis Terman, among others, that intelligence was something you were endowed with--whatever you got, you got.
This was not the attitude of Alfred Binet and Theodore Simon, who invented the IQ test in 1905 to identify the French schoolchildren most in need of attention. The Binet-Simon test aimed to lift students up rather than assign them a permanent ranking. Binet said:
"[Some] assert that an individual's intelligence is a fixed quantity which cannot be increased. We must protest and react against this brutal pessimism...With practice, training, and above all method, we manage to increase our attention, our memory, our judgment, and literally to become more intelligent than we were before."
But when the IQ test was adapted by the Stanford psychologist Lewis Terman, Binet's approach was replaced by a very different idea. Terman and his successors proclaimed that intelligence was a pre-loaded thing, and they packaged IQ tests in such a way that they seemed to prove that notion. In the last twenty years, that message has been reinforced by the very misleading idea of "heritability," which comes from twin studies and has been interpreted by many as saying that intelligence is 50-60 percent inherited and pre-ordained by our individual genetic codes.
Now we know better, for two reasons.
First, we've learned a lot more about the relationship of biology to ability. The idea that genes contain instructions for a fixed intelligence doesn't wash anymore. Genes don't issue fixed instructions for anything. Rather, genes interact with their environments. The process is totally dynamic and "interactionist." McGill University's Michael Meaney expresses it this way: "There are no genetic factors that can be studied independently of the environment, and there are no environmental factors that function independently of the genome. [A trait] emerges only from the interaction of gene and environment."
So it's not just the brain that is "plastic." This is also happening on a cellular level throughout our bodies. We call this "genetic expression"--our genes are constantly being turned on and off by our environment.
This is kind of a mind-blower of an idea, and takes some getting used to, but the bottom line is that all complex traits in human beings are the result of a dynamic process--and we can and do influence that process with our culture, our parenting, our teaching, and our desires and actions as individuals.
Second, we now know from Betty Hart, Todd Risley, Robert Sternberg, Anders Ericsson, Carol Dweck, James Flynn, and many other researchers that intelligence is, as Sternberg says, "a set of competencies in development."
In other words, intelligence is also a process. It is malleable. Getting kids to understand that malleability is vitally important. Carol Dweck's work powerfully reinforces that notion. Having the I-can-improve mindset rather than the some-people-are-just-gifted-and-others-aren't mindset is critical to achievement.
We need to talk about achievements and abilities as a matter of development rather than innate ability. That doesn't mean we pretend that we or our kids have total control over our lives--many influences come into play. But we should imbue them with the wonder of what is possible.
Photo credit: Sarah Sutter
In honor of the 150th anniversary of the publication of Charles Darwin's On the Origin of Species, let's meet the man who arguably did more to corrupt his ideas than any other figure in history: his half-cousin Francis Galton.
Galton was an influential anthropologist and statistician who lived about forty miles from Darwin's home in Kent, and who interacted with him frequently. After the publication of Darwin's 1859 landmark work, which introduced the first coherent view of natural selection, Galton was among the first to recognize its importance and to see a unique opportunity to advance his own ideas. Galton immediately sought to further define "natural selection" by arguing that differences in human intellect were strictly a matter of biological heredity -- what he called the "hereditary transmission of physical gifts."
Galton did not share the cautious scientific temperament of his cousin Darwin, but was a forceful advocate for what he believed in his gut to be true. In 1869, he published Hereditary Genius, arguing that smart, successful people were simply "gifted" with a superior biology. In 1874, he introduced the phrase "nature and nurture" (as a rhetorical device to favor nature). In 1883, he invented "eugenics," his plan to maximize the breeding of biologically superior humans and minimize the breeding of biologically inferior humans. All of this was in service to his conviction that natural selection was driven exclusively by biological heredity, and that the environment was just a passive bystander. It was Galton, in fact, not Darwin, who laid the conceptual groundwork for genetic determinism. Galton wrote:
"Biographies show [eminent men] to be haunted and driven by an incessant instinctive craving for intellectual work. They do not work for the sake of eminence, but to satisfy a natural craving for brain work, just as athletes cannot endure repose on account of their muscular irritability, which insists upon exercise. It is very unlikely that any conjunction of circumstances, should supply a stimulus to brain work, commensurate with what these men carry in their own constitutions."
Darwin himself later succumbed to this view, writing in "The Descent of Man":
"We now know, through the admirable labours of Mr. Galton, that genius. . . tends to be inherited."
It has taken us 150 years to unwind that scientific conviction. It may take 150 more to unwind the public misperception.
One of the things I hope to do in this space is facilitate communication between scientists and science writers about how best to describe complex scientific research to the public. After hearing some concern from University of Iowa neuroscientist and Behavioral Neuroscience Editor-in-Chief Mark Blumberg about Nicholas Wade's recent New York Times story, "Speech gene shows its bossy nature," I invited Blumberg to submit an open letter to Wade. Here it is, along with Wade's response:
Dear Nicholas Wade,
I'm very sympathetic to the terrific challenges you face in making new scientific research appealing and digestible to the public. But I have some specific concerns about your latest report on the FOXP2 gene, beginning with the headline, "Speech gene shows its bossy nature."
Can we really still call FOXP2 a "speech gene"? As you know, this FOXP2 mutation was originally identified in a London family, many of whose members exhibited profound language impairments. From that single observation, it became known as the "speech gene." But there was always the distinct possibility that the mutation influenced a myriad of other brain and body functions that, in turn, affected speech. Indeed, given all that we know about how genes work - as well as our sad history with grandiose claims about single-gene effects on behavior - wouldn't it be wise for all of us to be more cautious when communicating these findings to the public? As for people with FOXP2 mutations, a well-informed colleague has told me that they do indeed exhibit a variety of problems beyond those related to speech, just as we would expect. I fear that these other problems have not been adequately studied precisely because they detract from the preferred "speech gene" narrative.
As to its "bossy" nature, you write that FOXP2 "does not do a single thing but rather controls the activity of at least 116 other genes." That's true, but let's put it in context. As you know, such distributed effects are nothing new; genes are always part of complex networks and therefore are hardly ever expected to do a single thing. Thus, FOXP2 is part of a large, complex network of genetic and non-genetic factors that, under the right developmental conditions, appear to contribute to the human faculty for language - and a lot of other things as well. In fact, many studies have now shown conclusively that mice with the FOXP2 mutation exhibit changes in a myriad of organ systems, including the lung and brain. And yet FOXP2 is called a "speech gene" rather than a "lung gene" or a "brain gene."
I suggest that the better alternative is to describe FOXP2 in less dramatic terms - which you do very nicely when you write that "the whole network of [language-related] genes has evolved together in making language and speech a human faculty." It's frustrating, then, to read references in the same article to a simplistic and outmoded view of gene action - for instance, when you write of "genes under FOXP2's thumb" and FOXP2 as "a maestro of the genome." The new Nature findings actually portray a more sober view of FOXP2's powers.
This is not the first time that you have written about FOXP2 in the Times. Last May, you wrote an article entitled "A human language gene changes the sound of mouse squeaks." The subject of your piece was another scientific article, this one published in Cell, that reported on changes to brain and behavior in mice engineered to express the human version of the FOXP2 gene. One of the authors of that paper is quoted by you as promising that "We will speak to the mouse." An extraordinary promise coming from a scientist, don't you think?
It was the link to human language that garnered that mouse study so much acclaim. And what did they find? That the "humanized" infant mice emit vocalizations of a slightly lower pitch than typical infant mice. Having researched similar vocalizations in rats for many years, I knew before reading the Cell paper that the findings almost certainly had nothing to do with human language. In fact, any manipulation that alters the body size or respiratory system or larynx or a host of other factors in these animals could account for the small change in pitch of the mouse vocalizations. Given FOXP2's influence on so many organ systems, it would have been astonishing if their vocalizations had not been affected.
Trumping up FOXP2 as yet another star gene in a series of star genes (the "god" gene, the "depression" gene, the "schizophrenia" gene, etc.) not only sets FOXP2 up for a fall; it also misses an opportunity to educate the public about how complex behavior - including the capacity for language - develops and evolves.
Mark S. Blumberg, Ph.D.
F. Wendell Miller Professor, Department of Psychology, University of Iowa
Editor-in-Chief, Behavioral Neuroscience
REPLY FROM NICHOLAS WADE:
I'm a little puzzled by your complaint, which seems to me to ignore the special dietary needs of a newspaper's readers and to assume they can be served indigestible fare similar to that in academic journals.
You question whether FOXP2 can be called a speech gene and you suggest it could equally well be called a lung gene. But language is more interesting to most people, scientists included, than is lung function. It's because of FOXP2's connection with language that so many labs are working on it. So I cannot see any reasonable objection to calling it "a gene that underlies the faculty of human speech."
The role of this article was to update readers on a new finding, not to review the history of ideas about FOXP2. So there's no space to go into the argument about the gene's precise involvement with speech and language, much of which we have covered in earlier articles.
I won't comment on the headline on the article - reporters don't write headlines and are generally not consulted about them.
I don't see what's wrong in calling FOXP2 a "maestro of the genome," a phrase that would apply to many transcription factors. Yes indeed the gene is expressed in several other tissues besides the brain. But I had 550 words in which to set the story up in non-technical language, explain why it was interesting, and give general readers a flavor of what the researchers had found. There was simply no space for the qualifications you mention and they are not essential to the story.
You cite an earlier article about the mouse which Svante Paabo genetically engineered to carry a human FOXP2 gene. Then you ask if I didn't realize that Paabo was making "an extraordinary promise" in saying "We will speak to the mouse." Well, no, I didn't - I thought it was obvious he was making a joke. He's surely implying the mouse is rather unlikely to speak to him.
Your view is that Paabo's paper on his FOXP2 mouse was of little interest, and it's true that he and Wolfgang Enard only found a large number of rather subtle changes, including slightly different isolation whistles. But I think most people would say the experiment was important and needed to be done, even if we don't really understand yet what all the changes mean. That's why I thought it was worth writing up.
I don't understand your complaint that FOXP2 is being given star treatment. It's in the limelight because it's a really interesting gene that may provide the entryway to a major human faculty. If it fails to do so, we'll write that up too. Are you suggesting we should tell our readers nothing about FOXP2 for the next 10 years until we have a definitive answer? - That's the role for encyclopedias and review articles.
As for missing an opportunity to educate the public, that, with respect, is your job, not mine. Education is the business of schools and universities. The business of newspapers is news.
Reporter, The New York Times
Author, The Faith Instinct
I appreciate the intense reactions to this blog so far, and respect the lingering skepticism. (Some of the nastiness I could do without, but it wouldn't be the Internet without some tasty pot-shots). I certainly didn't expect to win over the entire crowd with a handful of short overview pieces containing little evidence and no depth. I get that smart Atlantic readers are going to scrutinize this stuff.
After three years of discussing it among friends, I also understand that this is an issue that often provokes a visceral response. We all have strong opinions about how we became who we are. We need to have these opinions--it's a part of forming an identity. After a century of genetic dogma, terms like "innate" and "gifted" are baked right into our language and thinking. I don't mean to suggest that we all believe that genes control everything. Instead, most of us believe in "nature" followed by "nurture": genes dispense various design instructions as our body is formed in utero, priming us with a certain level of intellectual, creative, and athletic potential; following this, environmental influences develop that potential to some extent or another. This is what we know to be true, and it makes perfect sense.
Well, it turns out not to work that way. But no one familiar with the new science of development expects these old beliefs to simply wash away in a few weeks or months just because a few smart-ass writers come along and say they know better.
As we try to present this stuff, there are a hundred sand traps of understandable confusion. When I argue that "innate" doesn't really exist, it may seem like I'm making the blank slate argument -- which I'm not. When I argue that talent is a process, it may seem like I'm arguing that anyone can do anything, which I'm not. When I argue that we can't really see our individual potential until we've applied extraordinary resources over many years, it may seem like I'm arguing that genetic differences don't matter -- which I'm not. When I criticize The Bell Curve, it may look like I'm an agent of the left pushing a liberal egalitarian agenda, which I'm not.*
What I am pushing is the consideration of a whole new paradigm. In doing so, I am of course just a conduit.
Meet some of the sources. Two weeks ago, a group of neuroscientists, psychologists, and cognitive scientists at the University of Iowa published a paper entitled "Short Arms and Talking Eggs: Why We Should No Longer Abide the Nativist-Empiricist Debate" in the journal Child Development Perspectives.
While their evidence is quite complex, their renewed argument is simple: "nature vs. nurture" doesn't adequately explain how we become who we are. That notion needs to be replaced.
Lead author John Spencer:
The nature-nurture debate has a pervasive influence on our lives, affecting the framework of research in child development, biology, neuroscience, personality and dozens of other fields. People have tried for centuries to shift the debate one way or the other, and it's just been a pendulum swinging back and forth. We're taking the radical position that the smarter thing is to just say 'neither' -- to throw out the debate as it has been historically framed and embrace the alternative perspective provided by developmental systems theory.**
"Developmental systems theory" is a vague mouthful, and the scientists behind these observations readily admit that they haven't yet found the most compelling new language to present their ideas to the public. But the basic idea, as I've written in previous posts, is that genes are not static; they are dynamic. Genes interact with the environment to form traits. The more closely scientists look at claims of so-called "hard-wired" behavior and abilities, the more they turn up evidence that actions and talents are formed in conjunction with the culture around them.
John Spencer again:
Researchers sometimes claim we're hard-wired for things, but when you peel through the layers of the experiments, the details matter and suddenly the evidence doesn't seem so compelling...When people say there's an innate constraint, they're making suppositions about what came before the behavior in question. Instead of acknowledging that at 12 months a lot of development has already happened and we don't exactly know what came before this particular behavior, researchers take the easy way out and conclude that there must be inborn constraints. That's the predicament scientists have gotten themselves into.
Imprinting is one of many examples reviewed by the Iowa researchers. In 1935, Viennese zoologist Konrad Lorenz famously discovered that newborn chicks whose eggs were incubated in isolation would still correctly pick the call of their mother over that of another animal. It seemed the perfect little proof of innate ability.
But in 1997, Gilbert Gottlieb discovered the flaw in that assumption. It turned out that when fetal chicks were deprived of the ability to make vocal sounds inside their own eggs -- that is, the ability to teach themselves what their species sounded like -- they were unable to pick the correct maternal sound from various animals.
Another famously innate quality is "dead reckoning," the ability of fish, birds, and mammals (including humans) to establish their current location based on past locations and movement history. How could young geese know how to fly home from 100 meters without trial and error? With no apparent answer to the mystery, the word "innate" was again used as a catch-all explanation. Then it became clear that mother geese train their goslings' navigational skills through daily walks.
How could baby chicks find their way back to a mother without clear sight of her? It turned out that they simply reversed the directions they had taken when getting lost.
One by one, the Iowa researchers show, scientists have declared basic abilities to be explainable only by hard-wiring, only to have a slow learning process revealed later under closer inspection and with better tools. The consistent refrain: abilities form in conjunction with development, community, and context. Genes matter, but actual results require genetic expression in conjunction with the environment.
(One big problem with this new paradigm, explains John Spencer, "is that it's much more complicated to explain why the evidence is on shaky ground, and often the one-liner wins out over the 10-minute explanation.")
The Iowa paper also delves deeply into claims of human language innateness, including what is known as "shape-bias." "Shape bias," the authors write, "simplifies the word learning situation and thereby aids vocabulary development, but it is not innate. Rather, it is the emergent product of a step-by-step cascade."
What does all of this have to do with Einstein's genius or your piano playing? Developmental systems theory tells us that, while genetic differences do matter, they cannot, on their own, determine what we become. From there, the whole idea of innate talent falls apart.
As this blog continues, you'll meet more of the scientists who are documenting and shaping these ideas. One of the things I'd like to do is bring them together as a community and give their umbrella notion a more accessible name. "Developmental genetics" is one possibility. "Environmental genetics" is another.
Suggestions are welcome.
* I am guilty of being a liberal on most issues, and there are elements of this new paradigm that gel nicely with a liberal sensibility; but there are also some very uncomfortable moral implications to come to terms with. Every writer has biases to be sure, but self-respecting journalists don't ignore or cherry-pick information because they like its political ramifications. I didn't write Data Smog because I wanted to bring down the Internet; I didn't offer some sanguine views on new surveillance technologies because I desire a police state, and I haven't been picking and choosing genetics and intelligence studies to prop up the Obama administration.
** These John Spencer quotes are taken from a University of Iowa press release about the journal article.
In providing an overview for this new blog's approach, I've so far touched on genetics and intelligence; now it's on to studies of talent and expertise, which provide the third key puzzle piece. Taken together, they suggest -- to me at least -- a whole new way to think about high achievement.
Many of you have already read about some of the key research -- the famous 10,000-hours-to-greatness observation of Anders Ericsson and others, described in several recent smart books, including Geoff Colvin's Talent Is Overrated, Malcolm Gladwell's Outliers and Daniel Coyle's The Talent Code.*
These studies are important, not because they put a specific hour-number on what it takes to be a champion, but because of the big idea behind that number. The breathtaking insight that comes through in the work of Ericsson and colleagues is this: talent is not a thing, but a process -- a very slow, largely invisible process that, up till now, has been nearly impossible to document and therefore very easy to misread. As long as this slow accretion of skills went unseen and unarticulated, the mature skills themselves seemed almost magic. For many centuries, greatness appeared to be god-given; later, in the 20th century, it was understood as gene-given. All along, these ideas were reinforced by astounding child prodigy stories that seemed to be explainable only by unusual innate "gifts."**
Now, Ericsson and colleagues -- there are many, with hundreds of studies already published -- are making the invisible visible.*** They are showing how all abilities are based in process. They are exploding the myth of "giftedness."
Their work also dovetails with genetic-environment interaction, and with research showing how extraordinarily plastic the human brain is -- how we constantly change its structure with our moment-to-moment actions. A new understanding thus emerges: the limits we think we see in ourselves and our kids are really more like obstacles, difficult but not impossible to overcome. What appear to be innate/genetic brick walls are actually just very steep hills to climb. According to this view, the real marvel of genes is how their dynamic properties allow us to expand and expand and expand our abilities -- if we push hard enough and have the right resources. (These are big ifs.)
Which brings us back to the public fixation with innateness. Given what we've all been told about genes, it's perfectly understandable when we look at a clumsy 8-year-old boy and surmise: "He's got no athletic talent. He just doesn't have the genes for it." But the new science of talent suggests a very different conclusion:
• His clumsiness was developed, not inborn. He became clumsy over time in response to many gene-environment interactions.
• His development continues, and nothing is set in stone. While the odds are of course against him, no one can say for certain whether this clumsy boy has professional sports in his future.
We simply don't know his ultimate potential, and neither will he until he marshals all of his resources to get there.
Genes will play a huge role, of course, and will ultimately limit him in some way. But we don't know precisely how.
Discovering our own potential is part of the marvel of being alive.
* I began writing (and blogging) about this stuff in 2007, long before any of these books were published. My book will come out after these books, and will probably be dismissed by some as a Johnny-come-lately. But I think mine has much to add, and I hope it will be seen as a complement to them. The reality is, all of these books (including mine) were written concurrently; I, for one, did not read any of them before finishing mine.
** I'll tackle the issue of child prodigies in future posts, and in my book.
*** Here's a tiny sampling of the studies from Ericsson and colleagues:
(Photo credit for the picture of the brain: http://www.flickr.com/photos/17657816@N05/1971827663)
A large number of websites and even quite a few books will tell you that Wolfgang Amadeus Mozart's IQ was 165. They'll also reveal that Benjamin Franklin's IQ was 160, Charles Dickens' was 180, Isaac Newton's was 190, and Blaise Pascal's was 195.
There's only one small problem with this data: The IQ test was invented in the early 20th Century -- long after all of these people were dead and buried.
Here's how this lunacy came about: The IQ test was first invented in France by Alfred Binet in the early 20th Century as a way to measure academic skills and pick out the students who were not learning as fast as they could and should. It was not designed to separate innately-smart people from less innately-smart people. Binet, in fact, did not believe intelligence was innate. He saw intelligence not as a thing, but as a process of acquiring certain thinking skills. (He turned out to be quite correct.)
Then along came Lewis Terman, a Stanford psychologist in the early 20th century who preferred Francis Galton's idea of intelligence: a certain innate quality that each person is born with. Terman reinvented the IQ test and sold it to American intellectuals and policymakers as a way to separate the intellectual wheat from the idiotic chaff. Terman also began an epic study on geniuses entitled, Genetic Studies of Genius.
Mind you, he had no proof that intelligence was gene-based. (We still don't have any such proof, contrary to what you might read elsewhere. See my post on heritability.)
Terman was well-funded and well-staffed. In 1926, he assigned one of his protégés, Catharine Cox, to somehow adapt their new IQ test to estimate the IQs of 301 well-known historical figures.
Here's the rub: Even in Terman's context, this made no sense. It was pure intellectual foolishness, even if you completely accepted his argument that IQ detected innate intelligence. That's not just because none of these people actually took an IQ test, but also because IQ tests only measure people's academic skills against other people their same age. The score is not a raw tally of right and wrong answers, but a weighted score that compares every test-taker's performance with every other same-age test-taker's performance in that particular year. 100 is always the median. A score of 100 means that 50% of the same-age students scored higher than you, and 50% scored lower.
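To make the relative nature of the score concrete, here is a toy sketch in Python (my own illustration, with invented raw scores -- not the actual Stanford-Binet norming procedure) of what an age-normed score does: it locates one test-taker within the distribution of same-age test-takers in a given year, nothing more.

```python
import statistics

def toy_iq(raw_score, same_age_raw_scores):
    """Convert a raw test score into a toy IQ-style score.

    The number says nothing about right answers in absolute terms;
    it only locates one test-taker relative to same-age test-takers
    in that particular year. (Real norming is more elaborate; this
    is only an illustration.)
    """
    mean = statistics.mean(same_age_raw_scores)
    sd = statistics.stdev(same_age_raw_scores)
    z = (raw_score - mean) / sd        # standing relative to peers
    return round(100 + 15 * z)         # conventional midpoint 100, spread 15

# Hypothetical raw scores for a cohort of ten-year-olds in one year.
cohort = [31, 38, 40, 42, 44, 45, 47, 50, 53, 60]

print(toy_iq(45, cohort))   # 100: dead center of this cohort, this year
print(toy_iq(60, cohort))   # ~128: well above this cohort, this year
```

Change the cohort or the year and the same raw performance yields a different number -- which is exactly why the exercise cannot be run backward onto people who never sat for the test.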
So how could anyone possibly hope to go back in time, look at the work of dead people, and deduce their IQ score? It was impossible.
In her report[i], Cox acknowledged: "The correction attempted in the present report is a crude approximation . . ."
Cox and Terman assigned a score of 200 to their hero Francis Galton. That would make him one of the great geniuses of all time.
Thanks partly to this study, IQ has become one of the great myths of our time. It's going to take us another century to replace it with a more sensible understanding of intelligence.
[i] "The Early Mental Traits of Three Hundred Geniuses," by Catharine M. Cox, from Genetic Studies of Geniu, edited by Lewis M. Terman. Stanford University Press, 1926.
"One of the most celebrated findings in modern psychiatry — that a single gene helps determine one’s risk of depression in response to a divorce, a lost job or another serious reversal — has not held up to scientific scrutiny, researchers reported Tuesday."
"The authors reanalyzed the data and found 'no evidence of an association between the serotonin gene and the risk of depression,' no matter what people’s life experience was, Dr. Merikangas said.
"By contrast, she said, a major stressful event, like divorce, in itself raised the risk of depression by 40 percent."
As a general rule, don't listen to anyone telling you that there's a "gene for" this or that. Even if there's a Ph.D. or M.D. at the end of the name, it's an old and misleading way of discussing genetics.
Thankfully, it's not just the science that's improving. Reporting on genetics has also been getting demonstrably better. Today's piece is a nice example, as is this extraordinary piece by Carl Zimmer from last November.
Added June 30: A new piece by John Grohol, "Chasing the Genetic Ghosts of Mental Illness," speaks to this same critique.
For a few decades now, we science writers have been unwitting victims of a scientific muddle called "heritability." Now we have a chance to wipe the slime off and do our jobs.
The popular confusion started in 1979, when University of Minnesota psychologist Thomas Bouchard became fascinated with a particular pair of long-separated identical twins, and adopted what he thought was a method to distinguish genetic influences from environmental influences -- to statistically separate nature from nurture. The approach was to compare the ratio of similarities to differences in separated identical twins with the same ratio in separated fraternal twins. Since identical twins were thought to share 100% of their DNA, while fraternal twins share, on average, only 50% of their genetic material (like any ordinary siblings), comparing these two unusual groups allowed for a very tidy statistical calculation.
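For readers who want to see the arithmetic, here is a minimal sketch of that tidy calculation. I'm using Falconer's classic formula -- heritability estimated as twice the gap between identical-twin and fraternal-twin similarity -- which is one standard version of the comparison; the correlation values below are invented for illustration, not taken from Bouchard's data.

```python
def twin_study_heritability(r_identical, r_fraternal):
    """Falconer-style twin-study estimate of 'heritability'.

    Identical twins share ~100% of their DNA, fraternal twins ~50%,
    so the gap in how similar the two kinds of twin pairs are is
    doubled to estimate the share of trait *variation* -- not of the
    trait itself -- statistically attributed to genes.
    """
    return 2 * (r_identical - r_fraternal)

# Invented within-pair correlations of IQ scores:
r_mz = 0.75   # identical (monozygotic) pairs reared apart
r_dz = 0.45   # fraternal (dizygotic) pairs reared apart

print(twin_study_heritability(r_mz, r_dz))   # 0.6 -> the "60% heritable" headline
```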
Bouchard and colleagues used the words "heritable" and "heritability" to describe their results.
There were just two problems with this approach. First, these terms were possibly the most misleading in scientific history. Second, it turns out that genetic influence cannot be separated from environmental influences. Nature is inextricably intertwined with nurture.
Strangely, "heritability" and "heritable" were actually never intended by behavior geneticists to mean what they sound like -- "inherited." What they called "heritability" was defined as "that portion of trait variation caused by genes." In a quick glance, that might seem awfully similar to "the portion of a trait caused by genes." But the difference is as great as Mt. Everest and the anthill in front your home.
This led to quite the muddle when Bouchard and others published twin-study data that seemed to demonstrate that intelligence was 60%-70% "heritable." What was that actually supposed to mean?
It did not mean that 60-70% of every person's intelligence comes from genes.
Nor did it mean that 30-40% of every person's intelligence comes from the environment.
Nor did it mean that 60-70% of every person's intelligence is fixed, while only 30-40% can be shaped.
What Bouchard et al intended it to mean was this (read v e r y slowly): on average, the detectable portion of genetic influence on the variation in -- not the cause of -- intelligence among specific groups of people at fixed moments in time was around 60-70%.
If that sounds confusing, that's because you are a human being. "Heritability" is so confusing that most of the people who use it professionally don't really understand it. Let's pick it apart:
Heritability, explains author Matt Ridley in his book Nature via Nurture, "is a population average, meaningless for any individual person: you cannot say that Hermia has more heritable intelligence than Helena. When somebody says that heritability of height is 90 percent, he does not and cannot mean that 90 percent of my inches come from genes and 10 percent from my food. He means that variation in a particular sample is attributable to 90 percent genes and 10 percent environment. There is no heritability in height for the individual."
"Cause of variation" is not remotely the same as "cause of trait."
In discussing "heritability" in the media, scientists have allowed the public to confuse "causes of variation" with "causes of traits." Heritability studies do not, and cannot, measure causes of traits. They can only attempt to measure causes of differences (or variation) in traits.
So, for example, a heritability study cannot even attempt to measure the cause of plant height. It cannot purport to tell you that some percent of plant height is caused by genes.
What it can attempt to do is measure the percentage influence that genes have on the differences in height in a particular group of plants. But the percentage would only apply to that particular group.
Heritability derives from a fixed moment in time. It can only report on how life is, at that moment, for the specific group studied. It cannot offer any guidance whatever about the extent to which a trait can be modified over time, or project how life can be for any other group or individual enjoying different resources or values.
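A small simulation makes the point. In the toy model below (my own construction, not drawn from any of the studies discussed), the simulated genetic effects are identical in two groups; only the spread of environments differs. The heritability-style estimate changes anyway, because it describes variation within a particular group under particular conditions -- not anyone's fixed endowment. Note, too, that the model is purely additive, the very assumption criticized below.

```python
import random
random.seed(1)

def share_of_variance_from_genes(genetic_effects, env_spread):
    """Toy variance partition: trait = genetic effect + environmental effect.

    Returns the share of trait variance attributable to the genetic
    effects in THIS group, under THIS range of environments. The same
    genes under different conditions yield a different number.
    """
    envs = [random.uniform(-env_spread, env_spread) for _ in genetic_effects]
    traits = [g + e for g, e in zip(genetic_effects, envs)]

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return variance(genetic_effects) / variance(traits)

genes = [random.gauss(0, 1) for _ in range(10000)]   # the same "genes" for both groups

print(share_of_variance_from_genes(genes, env_spread=1))   # uniform environments -> high estimate
print(share_of_variance_from_genes(genes, env_spread=5))   # varied environments -> low estimate
```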
This means that these studies don't even pretend to say anything about individual capability or potential.
Finally, many scientists now think that twin-study heritability estimates are sorely compromised by a basic flawed supposition. "[They] rest on the extraordinary assumption that genetic and environmental influences are independent of one another and do not interact," explains Cambridge biologist Patrick Bateson. "That assumption is clearly wrong."
Now you'll have a sense of how much salt to ingest when you come across silly phrases like this in the news:
"Since personality is heritable. . ." (The New York Times)
"Forty percent of infidelity [can] be blamed on genes" (The Daily Telegraph)
"Men's Fidelity Controlled By 'Cheating Genetics'" (The Drudge Report)
In the end, heritability estimates parrot a strict "nature vs. nurture" sensibility; they are statistical phantoms, purporting to represent something in populations that simply does not exist in actual biology. It's as if someone tried to determine what percentage of the brilliance of "King Lear" comes from adjectives. Just because there are fancy methods available for producing precise-looking numbers doesn't mean that those numbers actually have any meaning.
We live in a golden age of scientific discovery -- so much so that it can often seem overwhelming to keep track of all the observations (many of which seem to contradict one another). It's also usually impossible to identify the truly vital breakthroughs as they occur. We usually need time to help settle that for us.
Washington, DC February 3, 2009 – A mother’s life experience can affect the biology of her offspring, according to new animal research in the February 4 issue of The Journal of Neuroscience. The study shows that a stimulating environment improved the memory of young mice with a memory-impairing genetic defect and also improved the memory of their eventual offspring. The findings suggest that parental behaviors that occur long before pregnancy may influence an offspring’s well-being.