Monday, November 10, 2014

To Keep Invasive Asian Carp from the Great Lakes, Carp Catchers Get Creative

*Guest post by Noemie De Vuyst, one of several posts selected from the graduate EES3001 Scientific Literacy course at the University of Toronto-Scarborough.*

Some say fishing is a peaceful pursuit. Not so if you're one of the self-dubbed Carp-Hunters, a pair of Illinois fishing guides whose carp-catching antics have turned them into YouTube celebrities. Over the last three years, videos of their over-the-top methods have racked up hundreds of thousands of views.

They've netted carp while on water-skis, and speared them with samurai swords and costume Wolverine claws. In Illinois' rivers, the Asian carp are so abundant they practically jump into the outstretched nets themselves. In an ecosystem where the invasive species has largely displaced native fish, the Carp-Hunters' new hobby has a higher purpose: “We care greatly about preserving our natural ecosystem,” their video's intro reads. “Since we can't bass fish anymore we have taken on this burden.”

Silver Carp in the Illinois River, 2009. Nerissa Michaels/Illinois River Biological Station, via Detroit Free Press. 

Kooky as their methods may be, the Carp-Hunters have something in common with government agencies on either side of the Great Lakes: they're both battling the highly invasive Asian carp.
Though U.S. and Canadian officials may not be going after the invaders with the same flair – not everybody gets to name their fishing boat the “Carpocalypse” – they've been labouring to keep the fish out of the Great Lakes since escapees from fish farms were discovered in the 1990s. With their enormous appetites and extraordinary ability to reproduce quickly, Asian carp would be disastrous to ecosystems and economies if they ever reached the Great Lakes.

First brought to North America in the 1970s, Asian carp already dominate some US waterways. The town of Havana, Illinois, just 85 km downstream of the Carp-Hunters' fishing grounds, is thought to have one of the highest abundances of Asian carp on Earth. Here, the carp make up 60% of the fish community.

The uphill battle to keep carp out of the Great Lakes has popped up in the news recently. In early October, routine testing of 200 sites found a single sample of silver carp environmental DNA (eDNA) in the Kalamazoo River, a tributary of Lake Michigan.

What does it mean that this one sample tested positive? The presence of eDNA shows only that silver carp genetic material was present at the site. What it can't tell us is whether the carp was alive, or how many fish there might have been. In fact, the presence of eDNA doesn't even tell us that a silver carp was present at the site at all: it's possible that scales or tiny amounts of mucus were transported by boats or fishing equipment, or even in bird droppings. With no silver carp sightings in the Kalamazoo, it seems likely the positive result comes from one of these explanations.

Despite the low likelihood that silver carp had really spread to the Great Lakes, news of the positive eDNA result was quickly picked up by many local news outlets. Within days, the US Fish and Wildlife Service sped through the collection and testing of 200 more samples, and appealed to anglers to report any carp sightings.

Why such a quick response to a finding with such high uncertainty? If Asian carp were to spread to the Great Lakes, it's feared they would take over aquatic ecosystems and cost the fishing and angling industries millions of dollars. Silver carp are especially worrisome, since they have a taste for the same microorganisms and algae that many native species rely on.

By late October, the results of Michigan's second batch of eDNA testing were announced: all samples were negative. For now, it seems the silver carp have crept no closer to the Great Lakes watershed. Canada and the US continue to monitor their waterways closely and to put new measures in place to prevent the spread of the fish. This past July, Fisheries and Oceans Canada opened a new Asian Carp Science Lab. In a political climate that has squeezed the environmental sciences from all sides, the funding of a new facility highlights the carp's immense potential to cause damage.

So even with their home-built contraptions, it looks like Illinois' Carp-Hunters are doing their bit for the Great Lakes.

For more information on eDNA sampling at the Michigan Department of Natural Resources:
http://www.michigan.gov/dnr/0,4570,7-153--340230--rss,00.html
http://www.asiancarp.us/edna.htm

For details on the new Asian Carp Lab at Fisheries and Oceans Canada:
http://news.gc.ca/web/article-en.do?nid=865809

And to see those Carp-Hunters do their thing:

https://www.youtube.com/watch?v=hN2gMP3Q2Z4

Thursday, October 30, 2014

Deconstructing creationist "scientists"

I’ve been fascinated by creationism since I first moved to Tennessee – home of the Scopes “monkey” trial – over twelve years ago. And though I’ve been away from Tennessee for about seven years now, creationism still fascinates me. I find it interesting not because its arguments are persuasive or scientifically credible – they’re absolutely not – but as a social, or maybe psychological, phenomenon. Why, in the light of so much compelling evidence, do otherwise intelligent people hold on to something that contradicts the record of life that surrounds us? I’m a biologist because I find the tapestry of life full of wonder and richness, with an amazing story to tell.

But what fascinates me most of all are trained scientists, holding legitimate PhDs, who take up the cause of creationism. This is interesting from two angles: first, the ‘scientists’ themselves (more on them later), and second, the organizations that support and fund their operations. Creationist organizations readily adopt and promote these scientists-turned-creationists, even though they routinely belittle and try to undermine working scientists. It’s like the Republican Party dismissing the Hollywood elite as not real Americans while proudly flaunting Chuck Norris or Clint Eastwood. When the PhDs are on the side of creationism, they are great scholars with meaningful expertise; when they are against creationism (as are 99% of working scientists), they are elitists and part of a conspiracy.

Enter the latest parade of creationist scientists, whose authority is meant to persuade the public at an ‘Origin Summit’ at Michigan State University in a few days. The first thing you see are four bespectacled PhDs, whose credibility rests on the fact that they are PhD ‘scientists’. They are: Gerald Bergman, Donald DeYoung, Charles Jackson, and John Sanford. But, unfortunately for them, not all scientists are created equal.

What makes a scientist? That is not easily answered, but education is one element – and having a PhD from a recognized program and university is a good start. But being trained is not enough; there needs to be some sort of evaluation by the broader scientific community. First and foremost, a scientist needs to communicate their research findings to other scientists by publishing papers in PEER-REVIEWED academic publications. Peer-reviewed means that experts on the topic will examine your paper closely, especially the experimental design and analysis, and provide criticisms. All papers are criticized at this stage, but those with especially egregious problems will not be published. Scientists are also evaluated by other scientists when applying for research funds, being considered for promotion (for example, your record and papers are typically sent to 5-8 scientists so they can evaluate the meaningfulness of your contributions), or being considered for scientific awards.

Table 1: How to know that you are doing science.

So then, the ability to publish and survive scrutiny is paramount to being a successful scientist. Of course, someone who subscribes to science-as-conspiracy will say: “Wait, then scientists control who gets to be a scientist, and so those with new or controversial ideas will be kept out of the club.” The next thing to understand is what makes a scientist “famous” within the scientific community. The most famous scientists of all time are those who overturned scientific orthodoxy – the trailblazers who came up with better explanations of nature. Many scientists appreciate new ideas and new theories, but work on them has to be scientifically robust in terms of methodology and analysis.

Now back to our Origin Summit scientists: how do they compare to normal expectations for a successful scientist? We will use the average expectations for an academic scientist to get tenure as our benchmark (Table 1). First, Gerald Bergman, biologist. He has a staggering number of degrees, some from legitimate institutions (e.g., Wayne State University), and some from unaccredited places with dubious legal standing (e.g., Columbia Pacific University). He had a real faculty position at Bowling Green State University but was denied tenure in 1979. He claims that he was fired because of his anti-evolution religious beliefs (his claim – which to me says his creationism cannot be science). He went to court and, long story short, he lost because he had misrepresented his PhD to get the job in the first place. More important to our story here: what was his record? Fortunately for us, scientific publications, like the fossil record, accurately reflect historical events. Looking through scholarly search engines for the period 1976-1980 (when he would have been making a case for tenure), I could find only one publication credited to G.R. Bergman, and it appears to be a published version of his dissertation on reducing recidivism among criminal offenders. Published theses are seldom peer reviewed, and this is certainly not biology. He does not meet our basic expectations for the scientific authority he is promoted as.

Next is Donald DeYoung, astronomer. He is a professor in the Department of Science and Mathematics at Grace College, a Christian post-secondary institution. It has some accreditation, especially for programs such as counselling and business. It's not fully accredited, but it seems to be a legitimate Christian school. I searched for legitimate peer-reviewed publications, which was tricky because there also exists another D. B. DeYoung, also on the math/astronomy side of the business. If we ignore his non-peer-reviewed books, there may be only one legitimate publication, from 1975 in the Journal of Chemical Physics, looking at a particular iron isotope – nothing to do with the age of the Earth or evolution. One paper, so he does not meet our expectations.

Third is Charles Jackson with a PhD in education. There is nothing meaningful on this guy to suggest he is a scientist by any stretch of the imagination. Next.

Finally, we have John Sanford, a geneticist. Now we are getting somewhere! How can a person who studies the basic building blocks of life deny their role in shaping it? He is a plant breeder and worked at an agricultural experiment station associated with Cornell University. I found about a dozen real papers published in scientific journals from his pre-tenure time. None are actually on evolution; they seem to be largely about pollen fertilization and transfer, and crop production. His publications definitely changed later in his career, from basic plant breeding to creationist works. Most interestingly, he has a paper, published in 2007, on a computer simulator called Mendel’s Accountant, which simulates genetic mutation and population fitness – the basic stuff of evolution, but which can presumably be used to support his theories about mutations causing ‘devolution’ rather than providing the fuel for real evolution. I read the paper. The genetic theory underpinning it is not in line with modern theory, and this is further evidenced by the scant referencing of the rich genetics literature. Most of the models and assumptions seem to be made de novo, to suit the simulation platform, instead of the simulator fitting what is actually understood about genetic mechanisms. I assume this is why the paper is published not in a genetics journal but in a computer science one, and one that is not listed in the main scientific indexing services (often how we judge a journal to be legitimate). Regardless of the scientific specifics, Sanford is a legitimate scientist, and he is the one person I would love to ask deep questions about his understanding of the material he talks about.
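For readers unfamiliar with this class of model: Mendel’s Accountant itself is not reproduced here, but the sketch below shows what a conventional mutation-accumulation simulation looks like. It is my own toy illustration in Python, under textbook population-genetic assumptions (Poisson mutation counts, mostly mildly deleterious effects, Wright-Fisher resampling); every parameter value is invented, and none of this is Sanford’s code.

```python
import numpy as np

rng = np.random.default_rng(42)
N, GENS, MU = 200, 500, 1.0   # population size, generations, mutations/ind/gen

log_w = np.zeros(N)           # log fitness of each individual

for _ in range(GENS):
    # Each individual gains a Poisson number of new mutations this generation.
    n_new = rng.poisson(MU, size=N)
    owners = np.repeat(np.arange(N), n_new)
    # Effect sizes: small, mostly deleterious (90%), occasionally beneficial.
    effects = rng.exponential(0.005, size=owners.size)
    signs = np.where(rng.random(owners.size) < 0.9, -1.0, 1.0)
    delta = np.zeros(N)
    np.add.at(delta, owners, effects * signs)
    log_w += delta
    # Wright-Fisher selection: resample N parents proportional to fitness.
    w = np.exp(log_w - log_w.max())
    log_w = log_w[rng.choice(N, size=N, p=w / w.sum())]

print(f"Mean log fitness after {GENS} generations: {log_w.mean():.3f}")
```

In a toy like this, whether mean fitness declines or holds steady hinges entirely on the assumed fraction and size of beneficial mutations and on how efficiently selection removes deleterious ones – precisely the assumptions where, as far as I can tell, Sanford’s simulator departs from the mainstream genetics literature.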

The one thing to remember is that a PhD does not make one an expert in everything. I have a PhD in ecology and evolution, but I am not competent in basic physiology, for example, and would not (and should not) present myself as an authority to a broader public who may not know the difference between phylogeny and physiology.


So, at the end of the day, here is another creationist conference with a panel of scientific experts. One of the four actually deserves to be called that, and even then, he is likely to be talking about material he has not actually published on or researched. There is a reason why creationist organizations have a tough time getting real scientists on board and instead are relegated to using mostly failed hacks: there isn’t a scientific underpinning to creationist claims.

Monday, October 27, 2014

Making multi-authored papers work

Collaborative writing is almost unavoidable for ecologists – single-authored papers are practically a novelty these days, given the dominance of data sharing, multidisciplinary projects, and large-scale experiments. And frankly, despite the inevitable frustrations of co-authors, collaborative writing tends to make a manuscript better. Co-authors help prevent things from getting too comfortable: too reliant on favourite references, myopic arguments, or slightly inaccurate definitions.

The easiest collaborative writing, I think, involves small numbers of authors. Writing with large groups of people – and for me that’s probably anything over 5 – presents two types of challenges: first, the problems innate in finding consensus among many competing opinions; second, the logistical constraints that arise when many authors attempt to contribute to a single manuscript.

I’ve recently been lead author/wrangler on a manuscript with 15 authors. It seems to be turning out really well, mostly because all of the authors are interested and invested: all 15 have made significant contributions to the text. I’m by no means an expert on the topic of large collaborations, but I wanted to share some of the things I learned (or wish I had known to start with). All of this assumes that the writing process is indeed collaborative; if it is actually one or two main authors and a bunch of non-writing authors this may be much simpler (if prone to its own set of frustrations).

Process: It’s important to determine how things are going to be done early on, and to keep everyone updated on how that process is going. If parts of the manuscript will be split up, or if certain figures and analyses will be done by particular people, that should be determined early on and reasonable timelines agreed on. Whoever is managing or leading should keep in touch with all of the authors with updates and timelines, so the project doesn’t fall off the radar. Some thought should really go into what software you will be using, since once you’ve committed it’s difficult to switch. A lot of the time, frankly, you’re limited by the lowest common denominator: programs need to be broadly available (usually free or else very common) and not require a higher level of technical skill than most authors are comfortable with. This is the downside of using LaTeX or GitHub, for example. It’s easier (better?) to use an inferior program than to have half of the authors struggle with the learning curve on the program you chose. For that reason, programs that centrally host files, papers, and analyses – like Dropbox, Google Drive, or folders hosted on a private server – are popular. As with every part of this process: version control, version control, version control. GitHub is the most common choice for version control of software code. Dropbox allows you to revert to older versions of files, but with limits (unless you’re paying for the pro version, I think).

The more people that are involved, the more variation to expect from your plans: deadlines will be missed, key people will be on holidays, and not everyone will feel the same level of urgency. Note: if you give 10 academics a deadline, 1 person will be early, 7 will finish in the final hours before the deadline, and the rest will want an extension. Consider having explicit deadlines for important milestones, but assume you’ll need to provide some flexibility.

Editing and revising: In the best case scenario, writing with a large number of people is like having an extensive peer review before the paper ever gets published. If you can satisfy each of these experts, the chances of the manuscript making it through peer review unscathed are much higher.

When sending a draft out for edits and revisions from multiple authors, it may be helpful to be clear about what you are hoping for from this revision. What should the other authors focus on? Scientific merit, appropriate references, clarity and structure, and/or grammar and style? It may be that any or all opinions are welcome, but getting edits on verb tense or “which” vs. “that” may not be helpful on an early draft.

I’m not sure there is a perfect program for collaborative writing and editing that fits the ‘lowest common denominator’ requirement. Optimally, a program would be free or very common, require little in the way of installation, and allow real-time co-authoring, commenting, version control, and easy import and export. Problems with compatibility between different operating systems, for example, can seem minor with a single user but turn into a nightmare when a document is being opened across many different systems and versions. For smaller papers, many academics simply email a copy of the manuscript (often as MS Word or a PDF) around to the authors, and that’s workable for 3 or 4 sets of comments. But dealing with 15 conflicted copies of a manuscript sounds like hell. Using Google Docs/Google Drive was the compromise choice, and it mostly fulfilled our needs, with some irritations. One benefit is that Google Docs now has different editing modes: 'editing', 'suggesting', and 'viewing'. Only 'editing' allows direct changes to be made to the text. The 'suggesting' mode is more like ‘track changes’ in MS Word, and allows co-authors to comment on, add, or delete text in such a way that the main author can later choose to accept or reject each suggestion. The biggest benefit of Google Docs is that co-authors can edit at the same time, in real time, so the comments tended to be very conversational, with each co-author responding to other co-authors’ suggestions. This really helps identify where there is consensus or disagreement among authors. The main downside was that some authors prefer to edit offline, or in general to follow the process they are most comfortable with. It also seems that restructuring a manuscript is more difficult in a shared document, where others might disagree, than on a personal copy. And if a few authors dislike collaborative editing, you will still end up with a few conflicting copies, no matter how hard you try to avoid them. There are probably better ways, although I haven’t figured them out yet, and I hope someone will comment. For users of LaTeX, there is an online collaborative program – writeLaTeX – that might be useful. Also, though I’ve never tried it, penflip looks pretty promising as an alternative to Google Docs.

No matter what program you use, you’ll end up with many comments and edits, and often conflicting opinions. I think it’s usually best to defer to the subject matter expert – if a co-author wrote the seminal paper on the topic, consider what they say. That said, without a strong vision, many-authored papers can be unfocused, and trying to make everyone happy almost certainly will make no one happy. After taking all the comments and expert opinions into consideration, in the end the main author has the power :)

Postscript - Authorship order/inclusion/exclusion is always difficult when so many people are involved. Some advice here; also NutNet has some rather well thought out authorship guidelines.

Wednesday, October 15, 2014

Putting invasions into context

How can we better predict invasions?

Ernesto Azzurro, Victor M. Tuset, Antoni Lombarte, Francesc Maynou, Daniel Simberloff, Ana Rodríguez-Pérez and Ricard V. Solé. External morphology explains the success of biological invasions. Ecology Letters (2014) 17: 1455-1463.

Fridley, J. D. and Sax, D. F. (2014), The imbalance of nature: revisiting a Darwinian framework for invasion biology. Global Ecology and Biogeography, 23: 1157–1166. doi: 10.1111/geb.12221

Active research programs in invasion biology have been ongoing since the 1990s, but their results make clear that while it is sometimes possible to explain invasions post hoc, it is very difficult to predict them. Darwin’s naturalization hypothesis gets so much press in part because it was the first to state the now-common acknowledgement that the struggle for existence should be strongest amongst closely related species, implying that invasive species must somehow be different from native species to be so successful. Defining more generally what this means for invasive species in terms of niche space, trait space, or evolutionary history has had at best mixed results.

A couple of recent papers arrive, by rather different routes, at a similar conclusion: predicting invasion success is really about recognizing context. For example, Azzurro et al. point out that despite the usual assumption that species’ traits reflect their niches, trait approaches to invasion that focus on identifying traits associated with invasiveness have not been successful. Certainly invasive species may be more likely to show certain traits, but these are often very weak predictors, since many non-invasive species also have these traits. Morphological approaches may still be useful, but the authors argue that the key is to consider the morphological (trait) space of the invaders in the context of the morphological space used by the resident communities.
Figure 1. From Azzurro et al. 2014. A resident community uses morphospace as delimited by the polygon in (b). Invasive species may fill morphospace within the area already occupied by the community ((c) and (d)) or may use novel morphospace (e). Invasiveness should be greatest in situation (e).
As an illustration, the authors use the largest known invasion by fish: the invasion of the Mediterranean Sea after the opening of the Suez Canal, an event known as the ‘Lessepsian migration’. They hypothesize that a new species entering a community that fills some defined morphospace will face one of three scenarios (Figure 1): 1) it will fall within the existing morphospace and occupy less morphospace than its closest neighbour; 2) it will fall within the existing morphospace but occupy more morphospace than its closest neighbour; or 3) it will occupy novel morphospace compared to the existing community. The prediction is that invasion success should be highest for this third group, for whom competition should be weakest. Their early results are encouraging, if not perfect: 73% of species located outside the resident morphospace became abundant or dominant in the invaded range (Figure 2).
Figure 2. From Azzurro et al. 2014. Invasion success of fish invading the Mediterranean Sea in relation to morphospace, over multiple historical periods. Invasive (red) species tended to occupy novel morphospace compared to the resident community.
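The test at the heart of this approach is ultimately geometric, so it is easy to sketch. Below is a minimal Python illustration (my own, with invented trait values – not the authors’ code or data): treat the resident community’s morphospace as the convex hull of its species along two morphological axes, and ask whether an invader falls outside it (scenario 3 above).

```python
import numpy as np
from scipy.spatial import Delaunay

# Rows are resident species; columns are two morphological axes
# (e.g., the first two PCA scores of body-shape measurements).
residents = np.array([
    [0.2, 1.1],
    [0.5, 0.9],
    [1.0, 1.4],
    [0.8, 0.3],
    [0.1, 0.5],
])

# Triangulating the residents lets us test whether a point lies inside
# their convex hull -- the "polygon" of occupied morphospace in Figure 1.
hull = Delaunay(residents)

def occupies_novel_morphospace(invader_xy):
    """True if the invader falls outside resident morphospace (scenario 3)."""
    return hull.find_simplex(np.atleast_2d(invader_xy))[0] < 0

print(occupies_novel_morphospace([0.6, 0.8]))  # inside  -> False
print(occupies_novel_morphospace([2.0, 2.0]))  # outside -> True
```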
A slightly different approach to invasion context comes from Jason Fridley and Dov Sax, who revisit invasion in terms of evolution with their Evolutionary Imbalance Hypothesis (EIH). In the EIH, the context for invasion success is the characteristics of the invaders' home range. If, as Darwin postulated, invasion success is simply the natural expectation of natural selection, then considering the context for natural selection may be informative.

In particular, the postulates of the EIH are that: 1) evolution is contingent and imperfect, thus species are subject to the constraints of their histories; 2) the degree to which species are ecologically optimized increases with the number of ‘evolutionary experiments’ and with the intensity of competition (“Richer biotas of more potential competitors and those that have experienced a similar set of environmental conditions for a longer period should be more likely to have produced better environmental solutions (adaptations) to any given environmental challenge”); and 3) similar sets of ecological conditions exist around the world. When species from such regions are mixed, some will have higher fitness and may become invasive.

Figure 3. From Fridley and Sax, 2014.
How to apply this rather conversational set of tenets to actual invasion research? A few factors can be considered when quantifying the likelihood of invasion success: “the amount of genetic variation within populations; the amount of time a population or genetic lineage has experienced a given set of environmental conditions; and the intensity of the competitive environment experienced by the population.” In particular, the authors suggest using phylogenetic diversity (PD) as a measure of the evolutionary imbalance between regions. They show for several regions that the maximum PD of a home region is a significant predictor of the likelihood of species from that region becoming invasive. The obvious issue with max PD as a predictor is that it is a somewhat imprecise proxy for “evolutionary imbalance”, and one that correlates with many other things (often including species richness). Still, the application of evolutionary biology to a problem often considered primarily ecological may make for important advances.
Figure 4. From Fridley and Sax 2014. Likelihood of becoming invasive vs. max PD in the species' native region.
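As a concrete (if cartoonish) version of the analysis behind Figure 4, here is a minimal Python sketch of regressing invasion success on native-region max PD. All numbers are invented for illustration; this is not Fridley and Sax’s dataset or code.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: max PD of each species' native region, and whether
# that species became invasive (1) or not (0) in a recipient region.
max_pd   = np.array([120., 150., 200., 260., 310., 340., 400., 450.])
invasive = np.array([0, 0, 0, 1, 0, 1, 1, 1])

X = sm.add_constant(max_pd)               # intercept + predictor
fit = sm.Logit(invasive, X).fit(disp=0)   # maximum-likelihood logistic fit
print(fit.params)                         # intercept and slope on max PD

# Predicted probability of invasiveness for a source region with PD = 300
print(fit.predict(np.array([[1.0, 300.0]])))
```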

Monday, October 6, 2014

What is ecology’s billion dollar brain?

(*The topic of the billion dollar proposal came up with Florian Hartig (@florianhartig), with whom I had an interesting conversation on the idea*)

Last year, the European Commission awarded 1 billion dollars to a hugely ambitious project to recreate the human brain using supercomputers. If successful, the Human Brain Project would revolutionize neuroscience (although skepticism remains as to whether the project is more of a pipe dream than a reasonable goal). For ecology and evolution, where infrastructure costs are relatively low (compared to, say, a Large Hadron Collider), 1 billion dollars means that there is essentially no financial limitation on your proposal: nearly any project, experiment, analysis, dataset, or workforce is within the realm of possibility. The European Commission call was for research to occur over 10 years, meaning that the usual constraints on project length (driven by grant terms and graduate student theses) are also relaxed. So if you could write a proposal upon which there are essentially no constraints at all, what would it be for? (*If you think that 10 years is too limiting for a proper long-term study, feel free to assume you can set up the infrastructure in 10 years and run it for as long as you want.)

The first thing I recognized was that in proposing the 'ultimate' ecological project, you're implicitly stating how you think ecology should be done. For example, you could focus on the most general questions and start from the bottom. If so, it might be most effective to ask a single fundamental question. It would not be unreasonable to propose measuring metabolic rates under standardized conditions for every extant species, and developing a database of parameter values for them. This would be the most complete ecological database ever, and that certainly seems like an achievement.

But perhaps you choose something that is still of general importance but less simplistic, and run a standardized experiment in multiple systems. This has been effective for the NutNet project. Propose to run replicate experiments with top-of-the-line warming arrays on plant communities in every major ecosystem. Done for 10 years, over a reasonably large scale, with data recorded on physiology and important life history events, this might provide some ability to predict how warming temperatures are affecting ecosystems. 

The alternative is to embrace ecological complexity (and the ability to deal with complexity that 1 billion dollars offers). Given the analytic power, equipment, and man hours that 1 billion dollars can buy, you could record every single variable – biotic, abiotic, weather – in a particular system (say, a wetland) for every second of every day. If you don’t simply drown in the data you’ve gathered, maybe you can reconstruct that wetland and predict its every property from the details. While that may seem a bit extreme, if you are a complexity-fatalist you start to recognize that even the general experiments are quickly muddied by complexity. Even that simple, general list of species' metabolic parameters quickly spirals into complexity. Does it make sense to use only one set of standardized conditions? After all, conditions that are reasonable for a rainforest tree are meaningless for an ocean shark or a tundra shrub. Do you use the mean condition for each ecosystem as the standard, knowing that species may only interact with the variance or extremes in those conditions (such as desert annuals that bloom after rains, or bacteria that use cyst stages to avoid harsh environments)? What about ontogenetic or plastic differences? Intraspecific differences?

It's probably best then to realize that there is no perfect ecological experiment. The interesting thing about the Human Brain Project is that neuroscience is more like ecology than many scientific fields: it deals with complex organic systems with emergent properties and great variability. What ecology needs, ever so simplistically, is more data and better models. Maybe, like neuroscience, we should request a supercomputer that could locate and incorporate all ecological data ever collected, across fields (natural history, forestry, agronomy, etc.), and recognize the connections between those data based on geography, species, or scale. This could give us the most sophisticated possible data map, showing where data gaps exist and which areas are data-rich and ready for model development. Further, it could (like the Human Brain Project) begin to develop models for the interconnections between data.

Without too many billion dollar calls going on, this is only a thought experiment, but I have yet to find someone with an easy answer for what they would propose to do (ecologically) with 1 billion dollars. Why is it so difficult?

Monday, September 15, 2014

Links: Reanalyzing R-squares, NSF pre-proposals, and the difficulties of academia for parents

First, Will Pearse has done a great job of looking at the data behind the recent paper on declining R² and p-values in ecology, and his reanalysis suggests that there is a much weaker relationship between R² values and time (only 4% rather than the 62% reported). Because the variance in R² is both very large within years and unequal through time, a linear model may not be ideal for capturing this relationship.
Thanks @prairiestopatchreefs for linking this.
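To make that statistical point concrete, here is a small Python sketch (invented data, not Pearse's actual reanalysis) comparing an ordinary least-squares fit with a variance-weighted fit when the within-year spread in R² grows over time:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
years = np.arange(1990, 2011)
t = years - 1990
sigma = 0.02 * (1 + t / 5)                    # within-year noise grows over time
r2 = 0.40 - 0.001 * t + rng.normal(0, sigma)  # weak true decline, messy data

X = sm.add_constant(t)
print(sm.OLS(r2, X).fit().params)                        # unweighted slope
print(sm.WLS(r2, X, weights=1 / sigma**2).fit().params)  # precision-weighted slope
```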

From the Sociobiology blog, something that most US ecologists would probably agree on: the NSF pre-proposal program has been around long enough (~3 years) to be judged on its merits, and it has not been an improvement. In short, pre-proposals use a 5-page pitch to allow NSF to identify the best ideas and then invite those researchers to submit a full proposal similar to the traditional application. Joan Strassmann argues that not only is this program more work for applicants (you must write two very different proposals in short order if you are lucky enough to advance), it offers very few benefits for them.

The reasons for the gender gap in STEM academic careers get a lot of attention, and rightly so given the continuing underrepresentation of women. The demands of parenthood often receive some of the blame. The Washington Post reports on a study that considers parenthood from the perspective of male academics. The study took an interview-based, sociological approach, and found that the "majority of tenured full professors [interviewed] ... have either a full-time spouse at home who handles all caregiving and home duties, or a spouse with a part-time or secondary career who takes primary responsibility for the home." But the majority of these men also said they wanted to be more involved at home. As one author said, “Academic science doesn’t just have a gender problem, but a family problem...men or women, if they want to have families, are likely to face significant challenges.”

On a lighter note, if you've ever joked about PNAS' name, a "satirical journal" has taken that joke and run with it. PNIS (Proceedings of the Natural Institute of Science) looks like the work of bored post-docs, which isn't necessarily a bad thing. The journal has immediately split into two subjournals: PNIS-HARD (Honest and Real Data) and PNIS-SOFD (Satirical or Fake Data), which have rather interesting readership projections:


Friday, September 12, 2014

Do green roofs enhance urban conservation?

Green roofs are now commonly included in the design of new public and private infrastructure, bolstered by energy savings, environmental recognition and certification, bylaw compliance, and in some cases tax or other direct monetary incentives (e.g., here). While green roofs clearly provide local environmental benefits, such as reduced albedo (sunlight reflectance), storm water retention, and CO2 sequestration, green roof proponents also frequently cite biodiversity and conservation enhancement as a benefit. This last claim has not been broadly tested, but the existing data were assessed by Nicholas Williams and colleagues in a recent article in the Journal of Applied Ecology.

Williams and colleagues compiled all available literature on biodiversity and conservation value of green roofs and they explicitly tested six hypotheses: 1) Green roofs support higher diversity and abundance compared to traditional roofs; 2) Green roofs support comparable diversity and composition to ground habitat; 3) Green roofs using native species support greater diversity than traditional green roofs; 4) Green roofs aid in rare species conservation; 5) Green roofs replicate natural communities; and 6) Green roofs facilitate organism movement through urban areas.

Photo by: Marc Cadotte


What is surprising is that, given the abundance of papers on green roofs in ecology and environmental journals, very few quantitatively assess any of these hypotheses. What is clear is that green roofs support greater diversity and abundance than non-green roofs, but we know very little about how green roofs compare to remnant urban habitats in terms of species diversity, ecological processes, or rare species. Further, while some regions are starting to require that green roofs try to maximize native biodiversity, relatively few comparisons exist; those that do reveal substantial benefits from biodiverse green roofs.

How well green roofs replicate ground-level or natural communities is an important question for which there is insufficient evidence. It is important because, according to the authors, there is some movement to use green roofs to offset habitat lost elsewhere. This could represent an important policy shift, and one that may ultimately lead to lost habitats being replaced with lower-quality ones. This is a policy direction that simply requires more science.

There is some evidence that green roofs, if designed correctly, could aid in rare species conservation. However, green roofs, which by definition are small patches in an inhospitable environment, may assist rare species management in only a few cases. The authors caution that enthusiasm for using green roofs in rare species management needs to be tempered by designs that are biologically and ecologically meaningful to target species. They cite an example in which green roofs in San Francisco were planted with an important food source for an endangered butterfly, the Bay Checkerspot, which currently persists in a few fragmented populations. The problem is that the maximum dispersal distance of the butterfly is about 5 km, and there are no populations within 15 km of the city. These green roofs have the potential to aid rare species conservation, but only if coupled with additional management activities, such as physically introducing the butterfly to the green roofs.

Overall, green roofs do provide important environmental and ecological benefits in urban settings. But currently, very few studies document the ways in which green roofs provide ecological processes and services, enhance biodiversity, replicate ground-level habitats, or aid in biodiversity conservation. As the prevalence of green roofs increases, we will need scientifically valid understanding of their ecological benefits to better engage with municipal managers and affect policy.

Williams, N., Lundholm, J., & MacIvor, J. (2014). Do green roofs help urban biodiversity conservation? Journal of Applied Ecology DOI: 10.1111/1365-2664.12333

Monday, September 8, 2014

Edicts for peer reviewing

Reviewing is a rite of passage for many academics. But for most graduate students and postdocs, it is also a bit of a trial by fire, since reviewing skills are usually assumed to be gained osmotically rather than through any specific training. Unfortunately, the reviewing system seems ever more complicated for reviewers and authors alike (slow, poor quality, unpredictable). Concerns about modern reviewing pop up every few months, along with different solutions to the difficulties of finding qualified reviewers and maintaining the quality of reviews (including publishing an instructional guide, taking alternative approaches (PeerJ, etc.), or skipping peer review altogether (arXiv)). Still, in the absence of a systematic overhaul of the peer review system, an opinion piece in The Scientist by Matthew A. Mulvey and Dean Tantin provides a rather useful guide for new reviewers and a useful reminder for experienced ones. If you are going to do a review (and you should, if you are publishing papers), you should do it well.
From "An Ecclesiastical Approach to Peer Review" 
"The Golden Rule
Be civil and polite in all your dealings with authors, other reviewers, editors, and so on, even if it is never reciprocated.
As a publishing scientist, you will note that most reviewers break at least a few of the rules that follow. Sometimes that is OK—as reviewers often fail to note, there is more than one way to skin a cat. As an author you will at times feel frustrated by reviews that come across as unnecessarily harsh, nitpicky, or flat-out wrong. Despite the temptation, as a reviewer, never take your frustrations out on others. We call it the “scientific community” for a reason. There is always a chance that you will be rewarded in the long run. 
The Cardinal Rule
If you had to publish your review, would you be comfortable doing so? What if you had to sign it? If the answer to either question is no, start over. (That said, do not make editorial decisions in the written comments to the authors. The decision on suitability is the editors’, not yours. Your task is to provide a balanced assessment of the work in question.) 
The Seven Deadly Sins of sub-par reviews
  1. Laundry lists of things the reviewer would have liked to see, but have little bearing on the conclusions.
  2. Itemizations of styles or approaches the reviewer would have used if they were the author.
  3. Direct statements of suitability for publication in Journal X (leave that to the editor).
  4. Vague criticism without specifics as to what, exactly, is being recommended. Specific points are important—especially if the manuscript is rejected.
  5. Unclear recommendations, with little sense of priority (what must be done, what would be nice to have but is not required, and what is just a matter of curiosity).
  6. Haphazard, grammatically poor writing. This suggests that the reviewer hasn’t bothered to put in much effort.
  7. Belligerent or dismissive language. This suggests a hidden agenda. (Back to The Golden Rule: do not abuse the single-blind peer review system in order to exact revenge or waylay a competitor.) 
Vow silence
The information you read is confidential. Don’t mention it in public forums. The consequences to the authors are dire if someone you inform uses the information to gain a competitive advantage in their research. Obviously, don’t use the findings to further your own work (once published, however, they are fair game). Never contact the authors directly.
Be timely
Unless otherwise stated, provide a review within three weeks of receiving a manuscript. This old standard has been eroded in recent years, but nevertheless you should try to stick to this deadline if possible. 
Be thorough
Read the manuscript thoroughly. Conduct any necessary background research. Remember that you have someone’s fate in your hands, so it is not OK to skip over something without attempting to understand it completely. Even if the paper is terrible and in your view has no hope of acceptance, it is your professional duty to develop a complete and constructive review.
Be honest
If there is a technique employed that is beyond your area of expertise, do the best you can, and state to the editor (or in some cases, in your review) that although outside your area, the data look convincing (or if not, explain why). The editor will know to rely more on the other reviewers for this specific item. If the editor has done his or her job correctly, at least one of the other reviewers will have the needed expertise.
Testify
Most manuscript reviews cover about a page or two. Begin writing by briefly summarizing the state of the field and the intended contribution of the study. Outline any major deficits, but refrain from indicating if you think they preclude publication. Keep in mind that most journals employ copy editors, so unless the language completely obstructs understanding, don’t bother criticizing the English. Go on to itemize any additional defects in the manuscript. Don’t just criticize: saying that X is a weakness is not the same as saying the authors should address weakness X by providing additional supporting data. Be clear and provide no loopholes. Keep in mind that you are not an author. No one should care how you would have done things differently in a perfect world. If you think it helpful, provide additional suggestions as minor comments—the editor will understand that the authors are not bound to them.
Judgment Day
Make a decision as to the suitability of the manuscript for the specific journal in question, keeping in mind their expectations. Is it acceptable in its current state? Would a reasonable number of experiments performed in a reasonable amount of time make it so, or not? Answering these questions will allow you to recommend acceptance, rejection, or major/minor revision. 
If the journal allows separate comments to the editor, here is the place to state that in your opinion they should accept and publish the paper as quickly as possible, or that the manuscript falls far below what would be expected for Journal X, or that Y must absolutely be completed to make the manuscript publishable, or that if Z is done you are willing to have it accepted without seeing it again. Good comments here can make the editor’s job easier. The availability of separate comments to the editor does not mean that you should provide only positive comments in the written review and reserve the negative ones for the editor. This approach can result in a rejected manuscript being returned to the authors with glowing reviewer comments. 
Resurrection
A second review is not the same as an initial review. There is rarely any good reason why you should not be able to turn it around in a few days—you are already familiar with the manuscript. Add no new issues—doing so would be the equivalent of tripping someone in a race during the home stretch. Determine whether the authors have adequately addressed your criticisms (and those of the other reviewers, if there was something you missed in the initial review that you think is vital). In some cases, data added to a revised manuscript may raise new questions or concerns, but ask yourself if they really matter before bringing them up in your review. Be willing to give a little if the authors have made reasonable accommodation. Make a decision: up or down. Relay it to the editor. 
Congratulations. You’ve now been baptized, confirmed, and anointed a professional manuscript reviewer."