Sunday, February 13, 2011

Documenting the sacrifice

Whether working in remote regions or with venomous animals, biologists often put themselves in peril in the name of discovery. The first organized expeditions of naturalists, sailing to exotic lands with bravery supplemented by curiosity, were threatened by storms at sea, new and deadly diseases, unfamiliar animals, and unfamiliar cultures. Add plane crashes and paramilitary groups to this list, and not much has changed.

In a great New York Times piece titled 'Dying for Discovery', Richard Conniff recounts several stories of naturalists, ecologists, and conservation biologists killed while pursuing their passion for discovery. But just how many field biologists have died while working to understand life's secrets? This is an interesting question, and it raises a further one: are they adequately memorialized?
Gary Polis (1946-2000), desert ecologist, drowned with four other biologists during a storm in the Sea of Cortez.

In an attempt to tell the stories of fallen naturalists, Conniff hosts an interactive list, called the Wall of the Dead, of every biologist killed in the field that he has a record of. Readers can add names, and having visited the list several times over the past month, I've watched it grow substantially. I've known a few field biologists who died (and added one to the list), and several more who survived near-death experiences; this list is a great and important monument to their memories.

Monday, February 7, 2011

Further studies of the decline effect find decline of the decline effect

“The Truth Wears Off: Is something wrong with the scientific method?”

The decline effect, explored by Jonah Lehrer in a New Yorker article, refers to a temporal decline in the size of an observed effect: for example, the therapeutic value of antidepressants appears to have declined threefold since the original trials. Based on the cases presented, the effect is not limited to medical and psychological studies. One example from evolutionary biology is the relationship between physical symmetry and female choice: initial studies consistently found strong selection by females for symmetrical mates, but as time passed, the evidence grew weaker and weaker.

This may be a result of selective reporting: scientists focus on results that are novel and interesting, even if they are in fact simply statistical outliers or, worse, the result of unconscious human bias. This sentiment is troubling; humans, scientists or not, are proficient pattern finders, but our subconscious (or conscious) beliefs influence what we search for. Lehrer argues that replication, the process of carrying out additional, comparable but independent studies, isn't an effective part of the scientific method. After all, if study results are biased and replications don't agree, how can we know what to trust?

I don’t disagree with most of the article’s points: that scientists can produce biased results, PhD notwithstanding, that more effort and time should be invested in data collection and experimental methodology, and that the focus on 5% statistical significance is problematic. But I have reservations. For one, it’s not clear from the article how prevalent the decline effect actually is. I also wonder whether Lehrer, like the scientists he’s reporting on, has selected specific, interesting data points while ignoring the general trend of the research. In 2001, Jennions and Moller published evidence of a small negative trend in effect size over time across 200+ studies; however, they suggest this reflects a bias toward high statistical significance, which requires either large effect sizes (characteristic of the early studies published) or small effect sizes combined with large sample sizes (a scenario that takes more time).
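
To see how a significance filter alone could produce an apparent decline, here is a minimal simulation sketch (hypothetical code, not Jennions and Moller's actual analysis): it assumes a fixed true effect, lets sample sizes grow over time as a field matures, and "publishes" only studies that reach p < 0.05. The published effect sizes start out inflated and shrink toward the true value as samples grow, even though nothing about the underlying effect has changed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2        # assumed true standardized effect (constant over time)
studies_per_year = 50    # hypothetical number of studies attempted each year

for year in range(1, 21):
    n = 10 + 5 * year    # sample size per group grows as the field matures
    published = []
    for _ in range(studies_per_year):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(true_effect, 1.0, n)
        t, p = stats.ttest_ind(treatment, control)
        if p < 0.05 and t > 0:  # only significant, positive results get "published"
            # both groups have SD ~1, so the raw mean difference approximates Cohen's d
            published.append(treatment.mean() - control.mean())
    if published:
        print(f"year {year:2d}, n = {n:3d}: mean published effect = {np.mean(published):.2f}")
```

Run as written, the mean published effect declines across "years" toward 0.2, which is the pattern Jennions and Moller attribute to publication bias rather than to any change in nature.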

Even if the decline effect is rampant, does it represent a failure of replicability? Lehrer states that replication is flawed because “it appears that nature often gives us different answers”. As ecologists, though, we know that nature doesn’t give different answers; we ask it different questions (or the same question in different contexts). Ecology is complex and context-dependent, and replication is about investigating the general role of a mechanism that may have been studied only in a specific system, organism, or process. Additional studies will likely produce slightly or greatly different results, and optimally the outcome is a more comprehensive understanding of the effect. The real danger is that scientists, the media, and journals over-emphasize the significance of initial, novel results that haven’t been (and may never be) replicated.

Is there something wrong with the scientific method (which, curiously, is never defined in the article)? The decline effect hardly seems like evidence that we’re all wasting our time as scientists. For one, the fact that “unfashionable” results are still publishable suggests that replication is doing what it’s supposed to do: correct for unusual outcomes and converge on something close to the average effect size. True, scientists are not infallible, but the strength of the scientific process today is that it doesn’t operate at the individual level: it relies on a scientific community of peers, reviewers, editors, and co-authors, and hopefully this encourages greater accuracy in our conclusions.

Tuesday, February 1, 2011

Carnival #32 and still going strong

Want to know what people are talking about? The 32nd Carnival of Evolution is online, hosted by Denim and Tweed. Check it out, pass it along.