Posts Tagged ‘peerreview’
- EduPunk Repositories: If you don’t have access to an institutional or subject repository you can self-archive in, here’s a review of some alternatives.
- The evolution of scientific impact: PLoS’s article-based metrics rely on user comments on articles, but it has been difficult to persuade scientists to comment on each other’s articles on the Internet.
- Sustainability of OA archives: What if the archive you depend on disappears for lack of funding? ‘If Cornell can’t underwrite arXiv, arguably the most successful preprint archive ever, what does that mean for disciplinary repositories generally?’
- P2P U., an Experiment in Free Online Education, Opens for Business: ‘A group of professors and graduate students from around the world has started a new university of their own online, with an unusual model that is more like a book group than a traditional course.’
- When the “Wiki Way” = Poor Quality: Why ‘the distributed, “Wikipedia model” of content production does not work for textbooks’.
- PLoS Mulls Hosting Software amid Growing Crossover between Informatics and Publishing: ‘the team is ironing out details, such as whether to create a repository like SourceForge. . . .’ Maybe someone will notice that unlike PLoS, SourceForge doesn’t charge anything to let you contribute to the projects it hosts, or to start your own project there.
- E-textbook Mania Strikes Higher Ed: ‘truly open access textbooks offer a model that in the long run best serves faculty and their students. . . . Students are far more interested in the textbook crisis than the journal crisis.’
- Open-source textbook co. Flat World goes back to school with 40,000 new customers: The company makes money by selling customer service, printed textbooks and audiobooks; the electronic versions are free. Sounds like Red Hat.
In this fascinating and quite overwhelming study published in BMJ, Steven A. Greenberg, associate professor of neurology, argues that in the citation network he studied, "back door" invention negatively affected the quality of the knowledge produced, through "repeated misrepresentation of abstracts as peer reviewed papers to fool readers into believing that claims are based on peer reviewed published methods and data". Although the author clearly states that the back door is introduced through abstracts, which in his view are a way of avoiding peer review, my conclusion on the basis of this research is that peer review in medical science is capable of significantly contributing to the creation of a large unfounded authority. Here's the author's conclusion paragraph in full:
“Citation is both an impartial scholarly method and a powerful form of social communication. Through distortions in its social use that include bias, amplification, and invention, citation can be used to generate information cascades resulting in unfounded authority of claims. Construction and analysis of a claim specific citation network may clarify the nature of a published belief system and expose distorted methods of social citation.”
He develops a vocabulary of citation distortions, in which the "back door" is the method by which peer review is bypassed:
“In another form of invention, claims are introduced as fact through a “back door” that bypasses peer review and publication of methods and data. This is accomplished by repeated misrepresentation of abstracts as papers (seven different papers, 17 citations to 12 different misrepresented abstracts; […] Back door invention—repeated misrepresentation of abstracts as peer reviewed papers to fool readers into believing that claims are based on peer reviewed published methods and data”
In my reading of his text, the author implicitly confirms the responsibility of peer review when he states that a medical claim (see his paper for the specific claim used here) "is supported in this manner and accepted by peers as fact". In other words, peers accepted as fact a claim that slipped through peer review, multiple times, in multiple papers. The graph on the right shows that "Only one of 32 citations flows to papers [70, 71, 72, 73, 77, 78] that present data that conflict with the validity of these models" – models that were the basis of the knowledge claims supported as fact by all the papers in blue on the right of the graph. It's a complex analysis, and it's possible that I misunderstood. I emailed the author and asked for his view on my reading that he lets peer review off the hook too lightly. Here's the full paper: How citation distortions create unfounded authority: analysis of a citation network.
In an interview, Bora Živković, the online Community Manager of the open access journal PLoS ONE, answers the question of whether open access will change the way science is done:
I think it is inevitable that we get there; the only thing I cannot say is how fast, because there will be a lot of resistance – scientists are really very conservative and risk-averse about changing the system. But there are pioneers that are going to lead the way. They are going to get us to the point where research is put directly online in real time. There will be no such thing as journals any more, only platforms for self-publishing, where massive peer review is going on in real time. What's going to happen is the evolution of a system that assigns reputation to individuals depending on their contribution to the process. That is the key, I think. Once you can gain scientific reputation by your online contribution – theoretical work, commenting, or peer-reviewing others – then people will participate.
He predicts that scientific papers will evolve into collections assembled from various scientific sources, and that people with a talent for synthesis will emerge as those who monitor ongoing open research and write review papers. Read the full interview here.
- An interview with Pat Brown, one of the founders of the Public Library of Science: ‘I want to LITERALLY overthrow the scientific publishing establishment. . . . PLoS is just part of a longer range plan. The idea is to completely change the way the whole system works for scientific communication.’
- UK’s 24th Green Open Access Mandate, Planet’s 92nd: Coventry University
- expressive processing: an experiment in blog-based peer review
- The 100 Best Open Education Resources on the Web: plenty of lectures (in certain fields), but where are the open textbooks?
- Preprint repositories for the humanities: where are they?
Since Toni and I were getting into a discussion about the merits and flaws of peer review, I did a quick search to get a sense of the research that’s been done on the subject. Robergs (2003) attempts to summarise previous studies. He points out that peer review is a recent phenomenon in science:
Although evidence exists for some journals to have adopted a peer review system prior to the 20th century, other journals such as The Journal of the American Medical Association (JAMA) sought external opinion on manuscripts ‘only rarely’ through to the 1950s.
One reason for this was ‘the shortage of manuscripts for publication’. This situation changed drastically after World War II: ‘Journal editors experienced a transition of too few manuscripts to too many’:
the peer review system was not adopted for its ability to improve manuscript content and validity. Rather, the system was adopted, at least equally, as an answer to the realities of scientific publication where the volume of submissions out-stripped the resources of journals and professional organizations.
The implication seems to be that if scientific journals previously managed to select good articles without peer review, peer review is unnecessary. But it seems to me that, assuming that journal editors were reading the articles themselves in those days and were competent to do so, they were in effect using peer review.
Robergs notes that reviewers rarely agree on the quality of an article: ‘the available data indicate that there is minimal consensus in peer review between multiple reviewers’. Moreover, ‘blind’ review processes (which are supposed to maintain the anonymity of both authors and reviewers) are ineffective when the authors are well-known: ‘existing data indicate that most reviewers (75%) can detect the identity of a recognized researcher of a given topic’.
Armstrong (1996) provides some evidence indicating that peer review discourages the publication of ideas that challenge the conventional wisdom of the field.
One of the comments on Armstrong’s article mentions a hilarious study by Peters and Ceci (1982) in which the authors selected twelve articles that had been written by researchers at prestigious institutions and recently published in prestigious peer-reviewed psychology journals with non-blind refereeing practices, and simply re-submitted these articles to the same journals that had published them. The only change they made to the articles was to substitute fictitious authors’ names and institutions for the real ones. Only three of the resubmissions were detected; the remaining nine were reviewed and rejected, usually for ‘serious methodological flaws’. This seems to suggest that there’s some value in blind refereeing after all.
Responding to Armstrong, Miser (1998) points out a crucial problem with all attempts to critique peer review: there is no explicit model of the editorial process.
sometimes it appears that the editor of a journal is in charge, sometimes as though he were taking orders from his referees and associate editors and just publishing what they tell him to.
Since nobody has produced a scientific account of how editorial processes really work, all discussions of peer review are based on assumptions about those processes rather than real knowledge.
The proceedings of the recent International Symposium on Peer Reviewing are now available, but I haven’t looked at them yet.
Open Journal Systems (OJS) seems to be the best available free software solution for running a journal. Its demo is quite extensive and allows access to most of its functionality. Culture Machine is an example of an OJS journal in the UK with support for collaborative writing and sharing in academia in general.
My biggest problem with it is that although its peer reviewing options are good for the model currently used in academia, it doesn't seem to be modular enough to allow the kind of open peer reviewing I'll be proposing to journals. In other words, it maps onto the existing workflow of journals, while what I'm looking for is a web publishing system that will assist in innovating on the models of reviewing and collaboration in general. In addition to reviewing, I believe we need web tools for dynamic, open relationships within editorial collectives, and for changing the idea of the printed journal (more on this in another post). A feature request is always an option. I'll write a review of OJS from the perspective of these open and dynamic models of reviewing, co-editing and publishing that are on my mind as a logical next step in knowledge production.