Grassroots Review Journals assess the quality of scientific articles & finished manuscripts. Because we only review, we are not limited by copyright & people do not have to submit their articles to us for the system to work.
We aim to be a valuable entry into the literature & to destroy the power of the publishers.
We need coders, editors & messengers.
15/15. A first glimpse of the concept can be seen in the first Grassroots review journal, covering my own field of study. https://homogenisation.grassroots.is
I think it shows the added value of these open reviews for our community. The reviews help readers understand the article, its strengths & weaknesses, and how it fits in the literature. The classifications & the quantitative assessments help prioritize reading.
Feedback on this & any other ideas would help a lot. Please reply or mail me at Victor.Venema@grassroots.is
14. Help toppling Elsevier & Co. is welcome. 🎉
This is fun. Everyone grab one monopoly, call your friends & take it down. 😊
To get the first version of the review system going and be able to show how it works, we mostly need WordPress expertise. The code is on GitHub. https://github.com/grassrootsjournals
We also need designers, editors & groups starting journals, ambassadors looking for partners, community managers, etc. Feedback on the system (how would it fit your field of study?) is very valuable.
The review system could also ingest many information sources that help the reviewers & help readers build on the article: related code, data, protocols, retractions (of cited work), etc. My talk gives a long, incomplete list of possible integrations. https://zenodo.org/record/3923961
We need communication standards: https://www.openaire.eu/openaire-research-graph-open-for-comments
12. The 2nd step is to have multiple servers running this software, which can exchange single reviews and whole review journals.
Sharing reviews helps as articles can be important for several fields.
Being able to copy a journal makes it easier to start a new one. So if a scientific community is not happy with how their journal is run, they can quickly start a new one, on a new domain, fixing the problems of the old one. That this is possible will hopefully mean it is hardly ever used.
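Exchanging reviews between servers could be as simple as serializing each review with a stable identifier for the article it assesses. A minimal sketch in Python; the JSON schema here (field names `article_doi`, `journal`, `review_text`, `grades`) is my own illustrative assumption, not the format of the actual GitHub code:

```python
import json

# Hypothetical export format for passing a single review between
# Grassroots servers; all field names are illustrative assumptions.
def export_review(article_doi, journal, review_text, grades):
    """Serialize one review as a JSON string another server could ingest."""
    return json.dumps({
        "article_doi": article_doi,  # stable identifier of the reviewed article
        "journal": journal,          # the review journal this review belongs to
        "review_text": review_text,
        "grades": grades,            # quantitative assessments, e.g. per factor
    }, sort_keys=True)

def import_review(payload):
    """Parse a review exported by another server."""
    return json.loads(payload)
```

Copying a whole journal would then just be exporting every review it contains and importing them on the new domain.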
10. Once the Grassroots system is accepted, the authors could themselves determine when their article is considered published.
The pre-publication reviewer becomes a helpful colleague. If the authors ignore good suggestions and press ahead with publication too early, their studies will get worse grades in the post-publication assessment.
This makes the pre-publication review collaborative. No need for anonymity. This gives more accountability & credit. See my blog post: https://www.openuphub.eu/community/blog/item/separation-of-review-powers-into-feedback-and-importance-assessment-could-radically-improve-peer-review
9. Because the aim is to replace the current system & break the power of the publishers, *all* articles need to be assessed.
There are so-called #OverlayJournals that also perform open reviews, but they only review manuscripts from repositories. That is a fine end state, but it will not break publisher power, because funding agencies & universities, forced by rankings, will want to be credited for all studies.
That is also why the Grassroots assessments are quantitative, even if I like qualitative more.
8. The more the Grassroots review system is accepted, the less important it is where an article is published. But as long as it is not, the system does not force authors to publish in places that may hurt their career; this makes the transition easier.
Authors can more and more select journals that provide good service for a reasonable price. In the end, submitting to a repository or a journal will be equivalent. But many will be happy to pay for quality copy editing, visibility, lay-out, etc.
7. It takes years to make an #OpenAccess journal reputable. It is already better that authors at least have some journals to choose from and can consider the price and quality of service, but there is still not much competition and reputable journals have started asking authors to pay large fees.
While subscriptions block poor researchers as readers, Open Access fees limit their access to the scientific literature as authors. We need to get those prices under control.
6. The legacy publishers block the transition from a subscription model to #OpenAccess publishing where everyone could read scientific articles.
Copyright gives them a full monopoly and the subscription model makes it hard to start new competing journals. Less competition means more profits, so the publishers really like this model and fight Open Access as much as they can.
5. A scientific article costs on average about $2000, while a few hundred would cover the real costs. The service of the publishers is terrible: submitting a manuscript is a lot of work, and downloading an article or reference is somehow always complicated.
WELL HIDDEN LINKS TO THE PDF. 😡
NO I DO NOT WANT SOME ESOTERIC VERSION! 😡
NO I DO NOT WANT YOUR STUPID READCUBE! 😡
GIVE ME THE F*#$ING ARTICLE!!! 😡 😡 😡
4. A main aim of this new review system is to break the market power of the scientific publishers. Their market power is most clearly demonstrated by the profit margins of science publisher Elsevier, which have been 30% to 50% year after year. https://www.theguardian.com/science/political-science/2018/jun/29/elsevier-are-corrupting-open-science-in-europe
In an efficient market this would not be possible.
Quickly building monopolies is the business model of Silicon Valley. These companies are idolized, but they are part of the problem. Find their weak spot and break them up. 😊
3. The Grassroots assessment is made after publication of the article (post-publication peer review; PPPR).
This is better than the current review before publication, because it takes time to digest new science, and afterwards the assessment can be made by experts who have used the results.
Scientific articles are published when we are still grappling with understanding a problem. Later it becomes much easier to see which studies have contributed.
PPPR incentivizes publishing work of lasting value.
2. Currently researchers and scientific journals are basically rewarded based on the number of citations. This sets bad incentives, because citations correlate badly with contribution to science or society, and with technical quality.
We will explicitly assess these 3 factors:
1) Contribution to scientific progress.
2) Importance for society.
3) Technical quality.
1) is for scientists, to prioritize their reading. 2) is for funding agencies. 3) is for hiring good scientists.
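One way such a three-factor quantitative assessment could be combined across reviewers is a simple per-factor average. A sketch in Python; the factor names follow the list above, but the averaging scheme is my assumption, not a decided design:

```python
# The three assessment factors from the thread; the aggregation below
# (a plain mean over reviewers) is a hypothetical, illustrative choice.
FACTORS = ("scientific_progress", "societal_importance", "technical_quality")

def aggregate(assessments):
    """Average each factor over a list of per-reviewer grade dicts."""
    return {
        factor: sum(a[factor] for a in assessments) / len(assessments)
        for factor in FACTORS
    }
```

Keeping the three factors separate, rather than collapsing them into one score, is what lets scientists, funders, and hiring committees each read the number that matters to them.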
Just in case you are also there, the Twitter thread is here:
Remember it is peer review week this week. This Google Doc shows you all activities. Take your time, there is a lot going on.
I am thinking of an #OpenScience project & could use some help.
There is a database with information that is important for science, which has a well-documented REST API.
How do you make a hashtag out of that? #rESTaPI?
An important data element is often missing. So I would like to make a 2nd server that serves the same API & adds this element if it is missing.
What is good software to do this? Has anyone here done something similar & would be happy to help (funded)?
Boosts welcome. 🤗
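One way to build such an augmenting mirror is a thin pass-through server: forward every request to the upstream API, and patch the response before returning it. A sketch in Python; the upstream URL, the example `doi` field, and the lookup callable are all placeholders, since I don't know the actual database:

```python
import json
from urllib.request import urlopen

UPSTREAM = "https://api.example.org"  # placeholder; not the real API's URL

def patch_record(record, field, lookup):
    """Return the record with `field` filled in via `lookup` if it is missing."""
    if record.get(field) is not None:
        return record
    patched = dict(record)
    patched[field] = lookup(record)
    return patched

def upstream_fetch(path):
    """Fetch one JSON record from the upstream API."""
    with urlopen(UPSTREAM + path) as resp:
        return json.load(resp)

def handle_request(path, field, lookup, fetch=upstream_fetch):
    """Serve the same API as upstream, adding `field` when it is absent."""
    return json.dumps(patch_record(fetch(path), field, lookup))
```

Wrapped in any small web framework (Flask, FastAPI, or plain wsgiref), `handle_request` becomes the second server; the `lookup` callable is where the missing element would be computed or retrieved.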
Passenger Pigeon Manifesto. My unrefined title: Call for Open GLAM (galleries, libraries, archives, and museums) to join open science, open source and Wikipedia.
Peer Review Week 2020 is next week (September 21–25), dedicated to the theme of "Trust in Peer Review". Still open to contributions.
Motivations for performing scholarly prepublication peer review: A scoping review (pay-walled)
Let's bring the quality control of scientific articles back to the scientific community with open post-publication peer review independent of scientific journals
Fediscience is the social network for scientists.