ICON Blog

The International Council on Nanotechnology

Why Don't Scientists Submit Post-Peer-Review Comments?

When we were setting up the rating system at the Virtual Journal of Nano-EHS, there was much hand-wringing about what such a system would do to the credibility of our organization and to academic discourse in general. Many within our advisory group hoped it would help non-experts get a better sense of the expert community's opinions about the quality of papers in this new field, a quality widely recognized as somewhat uneven. But some prominent academics passionately argued that opening the vast database to user comments would devolve into the petty mudslinging, anonymous attacks, and general incivility found on other sites where public comments are permitted.

It turns out neither group has seen its hopes or fears realized. In the nearly nine months since we implemented a system in which one can rate a paper from 1 to 5 stars and optionally attach a comment, 34 ratings have been submitted on 33 papers in a database that now includes over 3,800 papers. Nineteen of those ratings had comments attached. The ICON database is by no means unique in the under-use of its rating and commenting functions.

This analysis of the use of public commenting functions at three major scientific publishers, Public Library of Science (PLoS), BioMed Central (BMC), and BMJ, found that while commenting is widespread on newspaper articles, blogs, consumer websites, and many other internet sites, scientists don't seem all that interested in commenting on scientific publications. The promised follow-up post sharing insights into why this might be has not yet been published, but commenters on the original analysis shared some of their thoughts. Among the reasons cited were the disconnect between how scientists read papers (saved PDFs) and where the comments reside (online); the availability of other social networking tools for indicating approval or disapproval, such as FriendFeed and Digg; and even inherent flaws in the rating process itself.

In looking through the ratings at our site, I am gratified to see that the people who chose to leave comments mostly provided brief but specific assessments of the merits or shortcomings of the rated paper. There appears to be no pent-up desire among the nano-EHS community to abuse our forum. But is there an unmet need for people to assess nano-EHS papers post-peer-review? If so, what other mechanisms should we consider employing? Feedback is welcome.

[Hat tip to @materialsdave for retweeting @solidstateux on the blog posting that prodded me to write this.]

New GoodNanoGuide Slideshow

Check out the latest SlideShare presentation on the GoodNanoGuide.