Grassroots Review Journals assess the quality of scientific articles & finished manuscripts. Because we only review, we are not limited by copyright & people do not have to submit their articles to us for the system to work.
We aim to be a valuable entry point into the literature & to destroy the power of the publishers.
We need coders, editors & messengers.
The free-to-register Joint Roadmap for #OpenScience Tools (JROST) Conference is December 14-16.
A survey of 322 editors of journals in ecology, economics, medicine, physics and psychology: only 2% use open peer review. 91% of editors found altering a review report at least sometimes appropriate. A majority supported co-reviewing and reviewers requesting access to data.
A potential problem of post-publication peer review is that it is not double-blind. This study of two neuroscience journals found, however, that double-blind review did not change the treatment of women much (p=0.06), while there is a big difference in how often women are invited to author articles.
CrowdPeer, a new open peer review system for preprints. It also aims to gather positive comments.
Scientists, Publishers Debate Paychecks for Peer Reviewers
Science reports on a survey about editors altering review reports. The authors call this stigmatized, dubious behavior. I see reviews as advice to the editor, who should be an expert and not forward bad advice. We need to talk.
Is open peer review, a growing trend in scholarly publishing, a double-edged sword?
A study of 171 journals finds: 32% give no information on the type of peer review. In 39% it is unclear whether preprints can be posted. 75% of journals have no clear policy on co-reviewing, citation of preprints, or publication of reviewer identities.
As we advocate publishing null results, we had to post this: a randomised trial of an editorial intervention to reduce spin in the abstract's conclusion [a short instruction sent alongside the review reports] showed no significant effect.
Knowledge Infrastructures and Digital Governance workshop. (summary, talks and slides)
How often do leading biomedical journals use statistical experts? 34% rarely or never used specialized statistical review, 34% used it for 10-50% of their articles, and 23% used it for all articles (n=107). These numbers have changed little since 1998.
What do Chinese researchers think about the peer review process?
15/15. A first glimpse of the concept can be seen in the first Grassroots review journal, on my own field of study: https://homogenisation.grassroots.is
I think it shows the added value of these open reviews for our community. The reviews help readers understand the article, its strengths & weaknesses, and how it fits into the literature. The classifications & the quantitative assessments help prioritize reading.
Feedback on this & any other ideas would help a lot. Please reply or mail me at Victor.Venema@grassroots.is
14. Help toppling Elsevier & Co. is welcome. 🎉
This is fun. Everyone grab one monopoly, call your friends & take it down. 😊
To get the first version of the review system going and be able to show how it works, we mostly need WordPress expertise. The code is on GitHub. https://github.com/grassrootsjournals
Designers, editors/groups starting journals, ambassadors looking for partners, community managers, etc. are also needed. Feedback on the system (how would it fit your field of study?) is very valuable.
The review system could also ingest many information sources that help the reviewers & help readers build on the article: related code, data, protocols, retractions (of cited work), etc. My talk gives a long, incomplete list of possible integrations. https://zenodo.org/record/3923961
We need communication standards: https://www.openaire.eu/openaire-research-graph-open-for-comments
12. The 2nd step is to have multiple servers running this software, which can exchange single reviews and whole review journals.
Sharing reviews helps as articles can be important for several fields.
Being able to copy a journal makes it easier to start a new one. So if a scientific community is not happy with how their journal is run, they can quickly start a new one on a new domain, fixing the problems of the old one. That this is possible will hopefully mean it hardly ever needs to be used.
10. Once the Grassroots system is accepted, authors could decide for themselves when their article is considered published.
The pre-publication reviewer becomes a helpful colleague. If the authors ignore good suggestions and press ahead with publication too early, their studies will get worse grades in the post-publication assessment.
This makes the pre-publication review collaborative. No need for anonymity. This gives more accountability & credit. See my blog post: https://www.openuphub.eu/community/blog/item/separation-of-review-powers-into-feedback-and-importance-assessment-could-radically-improve-peer-review
9. Because the aim is to replace the current system & break the power of the publishers, *all* articles need to be assessed.
There are so-called #OverlayJournals that also perform open reviews, but they only review manuscripts from repositories. That is a fine end state, but it will not break publisher power, because funding agencies & universities, forced by rankings, will want all studies to be credited.
That is also why the Grassroots assessments are quantitative, even if I like qualitative more.
8. The more the Grassroots review system is accepted, the less important it is where an article is published. But as long as it is not yet accepted, the system does not force authors to publish in places that may hurt their career; this makes the transition easier.
Authors can increasingly select journals that provide good service for a reasonable price. In the end, submitting to a repository or to a journal will be equivalent. But many will be happy to pay for quality copy editing, visibility, layout, etc.
Let's bring the quality control of scientific articles back to the scientific community, with open post-publication peer review independent of scientific journals.
Fediscience is the social network for scientists.