ICON Blog

The International Council on Nanotechnology


Why Don't Scientists Submit Post-Peer-Review Comments?

When we were setting up the rating system at the Virtual Journal of Nano-EHS there was much hand-wringing about what such a system would do to the credibility of our organization and to academic discourse in general. Many within our advisory group hoped such a system would allow non-experts to get a better sense of the expert community's opinions about the quality of papers in this new field, which has been recognized to be somewhat uneven. But some prominent academics passionately argued that opening up the vast database to user comments would devolve into the kind of petty mudslinging, anonymous attacks and overall lack of civility one can find on other sites where public comments are permitted.

It turns out neither group has seen its hopes or fears realized. In the nearly nine months since we implemented a system that lets users rate a paper from 1 to 5 stars and optionally attach a comment, 34 ratings have been submitted on 33 papers in a database that now includes over 3,800 papers. Nineteen of those ratings had comments attached. The ICON database is by no means unique in the under-use of its rating and commenting functions.
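To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python using only the figures reported above (treating "over 3,800" as roughly 3,800 papers):

    # Rough usage rates for the ICON Virtual Journal rating system,
    # based on the figures reported above (~9 months of data).
    total_papers = 3800        # papers in the database ("over 3,800")
    papers_rated = 33          # papers that received at least one rating
    ratings = 34               # total ratings submitted
    ratings_with_comment = 19  # ratings that included a written comment

    print(f"Share of papers rated: {papers_rated / total_papers:.1%}")      # ~0.9%
    print(f"Ratings with a comment: {ratings_with_comment / ratings:.1%}")  # ~55.9%

In other words, fewer than one paper in a hundred has been rated, although more than half of the ratings that do come in arrive with a substantive comment.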

This analysis of the usage of public commenting functions at three major scientific repositories, Public Library of Science (PLoS), BioMed Central (BMC) and BMJ, found that whereas commenting is widespread on newspaper articles, blogs, consumer websites and many other internet sites, scientists don't seem all that interested in commenting on scientific publications. The promised follow-up post sharing insights into why this might be has not yet been published, but commenters on the original analysis shared some of their thoughts. Among the reasons cited were the disconnect between how scientists read papers (saved PDFs) and where the comments reside (online); the availability of other social networking tools, such as FriendFeed and Digg, for signaling approval or disapproval; and even the inherent flaws in rating processes.

In looking through the ratings at our site, I am gratified to see that the people who chose to leave comments for the most part provided brief but specific analyses of the merits or shortcomings of the rated paper. There appears to be no pent-up desire among the nano-EHS community to abuse our forum in inappropriate ways. But is there an unmet need for people to assess nano-EHS papers post-peer-review? If so, what other mechanisms should we consider employing? Feedback is welcome.

[Hat tip to @materialsdave for retweeting @solidstateux on the blog posting that prodded me to write this.]

Too much data, too little context

Those of us who have been working in nanotechnology since the beginning of the decade have witnessed the remarkable growth and evolution of research into engineered nanomaterials' environmental, health and safety impacts. In 2001, there were virtually no papers addressing the impacts of intentionally manufactured nanomaterials.

Fast forward to now. This graph shows the explosive growth of research papers covering aspects of nano-EHS between 2001 and 2008. In a few short years we've gone from no data to, one could argue, too much data. Too much data, you say? Then explain why every newspaper article and policy report I read on the subject ends up saying basically the same thing: we still don't know enough about engineered nanomaterials to quantify risks.

The reasons are myriad and include the slow development and acceptance of standards for toxicity testing, materials characterization and even terminology; the dearth of validated protocols for testing; and other ripples of the culture clash that ensued when materials scientists, aerosol physicists, environmental engineers, and toxicologists all started to learn to collaborate.

People who have witnessed the emergence of other interdisciplinary fields of inquiry could have told us it would take some time to work out and then propagate best research practices. But there seems to be a special urgency to nano-EHS research as governments, NGOs, companies, attorneys and other interested parties grapple with how this body of data should inform decision-making. The various "solutions to the nano-EHS issue" being bandied about, including regulation, insurance policies, voluntary codes and risk markets, all rely upon good-quality data that are correctly interpreted. Journalists need a feel for what a reasonable community of experts thinks about the latest paper demonstrating the hazards of a particular nanomaterial in a particular laboratory experiment.

In short, context and analysis are critical.

Once upon a time, ICON thought it could provide this context and, indeed, we've produced a few backgrounders that review and analyze hot topics in nano-EHS. But this function is best performed by the community at large, those of you who are also wrestling with questions about choice of medium, dose, exposure route, particle sizing technique and other minutiae of life in the lab.

Starting this week, the ICON Virtual Journal aims to provide you the opportunity to shape future nano-EHS research practice by commenting on papers in our database. Despite an overwhelmingly positive response to this idea from people we surveyed during the conception and development phase, there remains some discomfort with the idea of people passing judgment on papers that have already passed through peer review. (Because we all know the peer review process is perfect.) Here are the top two reasons your peers gave for wanting this rating system:
  • Papers of high quality should be recognized so they can serve as models for other researchers in this field
  • This will help journalists, the public and other lay audiences know which research is the best, which will inform the public dialogue over nano’s risks and benefits
Now what could be wrong with recognizing papers of outstanding quality so that the field as a whole gravitates toward best practices and people on the outside understand the implications of new work? Yes, yes, we still have issues to work out with respect to standards, etc., but in addition to giving a snapshot of where the field is now, the ratings could actually advance the discussion of best practices and broaden it to include underrepresented voices. "You may say I'm a dreamer. But I'm not the only one."

Unless you're a troll, I invite you to register at the site and rate 5 papers with which you are very familiar. Then email 10 colleagues and encourage them to do the same. (Start with the cranky ones who are always griping to you about the @#^% that gets published these days in the vanity journals.) Choose a non-identifying username if you want (students seeking future employment) or publish under your own name as a way of demonstrating how smart and thoughtful you are (consultants, tenured professors). Either way, we'll be able to pull inappropriate content off the site as needed.