Archive for the ‘Quick Research’ Category
In another excellent post, Publishing science on the web, John Wilbanks reacts to discussions he had with British Library staff and leaders, who told him that:
Publishers frequently claim four functions: registration (when was an idea stated?), certification (is the idea original, has it been “proved” to satisfactory peer review?), dissemination (delivery), and preservation of the record. The journal thus provides for both the claiming of ideas by scientists and for the “memory” of the sciences.
He responds by saying that the Web already does a lot of this, outside of journals and existing scientific and publishing mechanisms: “Wikis and blogs provide almost costless registration and dissemination of new scientific communication.” However, resistance to integration of the Web into science is strong. Referring to science as an inefficient wiki, he states that, in the current model of scientific production, “the incremental edits are made in papers instead of wikispace, and significant effort is expended to recapitulate the existing knowledge in a paper in order to support the one-to-three new assertions made in any one paper”. Another major problem he underlines, one that I find hugely problematic in my own work, is that each discipline has its own highly specialised language through which it operates. This is a problem for creating scientific mashups:
Right now the problem is we still think about cross disciplinarity as a function of people choosing to work together. But the internet and the Web give us a different model. What’s more cross disciplinary than Google?
Although we use Google daily in our research, and it leads us across scientific fields, the obstacles are still both the language barrier between fields and “lack of knowledge interoperability at the machine level”.
John Wilbanks, in his recent blog post Integrate, Annotate, Federate, makes some key points which I entirely share and work on. That is, Open Access is only the beginning. The most important work, on rewiring the sciences, is about to start happening. And, as Wilbanks suggests, we have to loudly ask for it and work on it. It’s not going to happen “naturally”; we have to make it happen. Citations are currently the only way of integrating scientific works, but they are no longer the only possible method, and that has to change. Here’s his description of one of those principles:
Annotation is the second new essential function. The old method of annotation is through either writing a new paper that validates, invalidates, extends, or otherwise affects the assertions made in an old paper. Or if something is really wrong, there might be a letter to the editor or a retraction. In a wiki world, this is fundamentally insane. The paper is a snapshot of years of incremental knowledge progress. We have much better technology to use than dead trees.
Rather than leaving annotation to publishers, Wilbanks suggests we need to move on and “create an open platform that actually tracks the kind of annotation-relationships that the web enables”. Building on bloggers’ use of trackback, we should extend those protocols to connect articles, wiki pages, database entries and so on, which would make already existing links explicit, visible, and trackable. A logical move would be either to implement the pingback protocol in the major software applications that handle these other types of material, or, if necessary, to extend the protocol to support them.
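To make this concrete, here is a minimal sketch of the two halves of the existing Pingback protocol that such a platform would build on: discovering a page’s advertised pingback endpoint, then notifying it via XML-RPC that our page links to it. The URLs are hypothetical placeholders, and the discovery regex is a simplification (real pages may also advertise the endpoint via an X-Pingback HTTP header, or order the attributes differently).

```python
# Sketch of the Pingback protocol (hypothetical URLs).
import re
import xmlrpc.client

def discover_pingback_endpoint(html):
    """Find the pingback server advertised in a page's <link> element.
    Simplified: assumes rel="pingback" appears before href=""."""
    m = re.search(r'<link[^>]+rel="pingback"[^>]+href="([^"]+)"', html)
    return m.group(1) if m else None

def send_pingback(source_uri, target_uri, endpoint):
    """Tell the target's pingback server that source_uri links to target_uri.
    The spec defines a single XML-RPC method, pingback.ping."""
    server = xmlrpc.client.ServerProxy(endpoint)
    return server.pingback.ping(source_uri, target_uri)

page = '<html><head><link rel="pingback" href="https://example.org/xmlrpc"></head></html>'
print(discover_pingback_endpoint(page))  # https://example.org/xmlrpc
```

Extending this to wiki pages or database entries would mostly mean adding the discovery markup and an XML-RPC endpoint to the relevant software, which is why implementing the existing protocol looks like the path of least resistance.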
Learned society members and open access: ‘Abstract: The individual members of 35 UK learned societies were surveyed on their attitudes to open access (OA); 1,368 responses were received. Most respondents said they knew what OA was, and supported the idea of OA journals. However, although 60% said that they read OA journals and 25% that they published in them, in both cases around one-third of the journals named were not OA. While many were in favour of increased access through OA journals, concerns were expressed about the cost to authors, possible reduction in quality, and negative impact on existing journals, publishers, and societies. By contrast, less than half knew what self-archiving was; 36% thought it was a good idea and 50% were unsure. Just under half said they used repositories of self-archived articles, but 13% of references were not in fact to self-archiving repositories. 29% said they self-archived their own articles, but 10% of references were not to publicly accessible sites of any kind. The access and convenience of self-archiving repositories were seen as positive, but there were concerns about quality control, workload for authors and institutions, chaotic proliferation of versions, and potential damage to existing journals, publishers, and societies.’
Professor of Law and Economics Steven Shavell argues that it would be socially beneficial to abolish copyright of academic works, but that the actual cost of publishing would then have to be paid upfront by someone other than private publishing companies (he suggests universities), and that such costs are quite high:
The Costs of Learned Journal and Book Publishing, A Benchmarking Study for ALPSP, Dryburgh Assoc., Ltd, September, 2002, at 17, reports from a survey that the total first copy cost of an academic book is £7,391 (54% of which is for copyediting and typesetting) … at 62 that “the upfront costs for publishing a monograph are . . . from about $20,000 on the low end to many multiples of that . . . .” in Sanford G. Thatcher, From the University Presses – The Hidden Digital Revolution in Scholarly Publishing: POD, SRDP, the “Long Tail,” and Open Access, Against the Grain, April 2009, http://www.against-the-grain.com (last visited June 30, 2009). Harvard University Press suggested in a conversation that the average first-copy cost per page is about $50, implying a first copy cost of $15,000 for a book of 300 pages. Telephone interview with personnel, Harvard University Press, in Cambridge, MA. (May 28, 2009). Also, it is reported that the copy editing costs of a page of an article average $85 – see page 258, Table 51, Tenopir and King supra note 5 – suggesting that copy editing costs of a book of 300 pages would be over $20,000.
How about we don’t do that at all (use university funds to pay for publishing)? First, I don’t believe these numbers are true; they make no economic sense at all. For example, Gary Hall states that the average monograph in the humanities (which applies to the social sciences too, as far as I know) sells 200-600 copies; I’ll take 450 here. If this were true, and if each copy sold for £30 (roughly midway between £50 hardbacks and £20 softbacks), and the publisher got half of it (Amazon takes 50%), we are left with an average gross of £15 x 450 = £6,750. Why then would private publishers care to publish academic monographs if returns are so low and profit can hardly be made? These numbers make no sense, at least not in the social sciences and humanities. Either monographs sell more on average, or the initial costs are lower. I’m inclined to believe that the initial costs are lower. Or perhaps the median sales numbers in other academic fields are higher.
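The back-of-the-envelope calculation above can be written out explicitly, using the post’s own assumptions (450 copies sold, a £30 cover price, the retailer keeping half):

```python
# Quick sketch of the monograph-revenue estimate above.
copies_sold = 450        # within Hall's reported 200-600 range
cover_price = 30.0       # £, between £50 hardback and £20 softback
publisher_share = 0.5    # the retailer (e.g. Amazon) takes the other half

gross = copies_sold * cover_price * publisher_share
print(f"£{gross:,.0f}")  # £6,750
```

That £6,750 gross sits below even the £7,391 first-copy cost quoted above, which is the core of the inconsistency being pointed out.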
It’s a paper well covered by a lot of other research, so let’s say my quick and dirty calculation is mistaken and the numbers do add up and apply universally across most of academia. Even so, Shavell makes the most intriguing economic argument of the paper:
On the whole, the amount that universities would save could exceed the amount they would pay in publication fees, for the subscription and new book prices now paid cover publisher costs and profits, whereas the publication fees would cover only publisher costs. That is, university expenditures on publication fees could be less in a world without copyright than their expenditures today on subscriptions and book purchases, because universities would no longer be financing publisher profits from academic works.
In the final section, the author suggests that Open Access is facing difficulties (reputable journals and publishers refusing to embrace OA, and overall it is spreading too slowly) that might not be overcome. His suggestion is the introduction of a new law abolishing copyright for academic works, and he goes on to suggest how an academic work could be identified. He also believes that OA journals are currently of low quality, but he attributes this to their lack of time to develop, stating that the quality difference is expected to shrink over time.
It’s a long paper with claims densely backed up by other research, so a much more detailed reading is required. Anyhow, exciting research. It also points to how more and better economically grounded research is needed if the Open Access arguments for monographs are to be stronger. Research that, crucially, includes the overall picture, which, as this paper shows, must include library budgets.
Since Toni and I were getting into a discussion about the merits and flaws of peer review, I did a quick search to get a sense of the research that’s been done on the subject. Robergs (2003) attempts to summarise previous studies. He points out that peer review is a recent phenomenon in science:
Although evidence exists for some journals to have adopted a peer review system prior to the 20th century, other journals such as The Journal of the American Medical Association (JAMA) sought external opinion on manuscripts ‘only rarely’ through to the 1950s.
One reason for this was ‘the shortage of manuscripts for publication’. This situation changed drastically after World War II: ‘Journal editors experienced a transition of too few manuscripts to too many’:
the peer review system was not adopted for its ability to improve manuscript content and validity. Rather, the system was adopted, at least equally, as an answer to the realities of scientific publication where the volume of submissions out-stripped the resources of journals and professional organizations.
The implication seems to be that if scientific journals previously managed to select good articles without peer review, peer review is unnecessary. But it seems to me that, assuming that journal editors were reading the articles themselves in those days and were competent to do so, they were in effect using peer review.
Robergs notes that reviewers rarely agree on the quality of an article: ‘the available data indicate that there is minimal consensus in peer review between multiple reviewers’. Moreover, ‘blind’ review processes (which are supposed to maintain the anonymity of both authors and reviewers) are ineffective when the authors are well-known: ‘existing data indicate that most reviewers (75%) can detect the identity of a recognized researcher of a given topic’.
Armstrong (1996) provides some evidence indicating that peer review discourages the publication of ideas that challenge the conventional wisdom of the field.
One of the comments on Armstrong’s article mentions a hilarious study by Peters and Ceci (1982) in which the authors selected twelve articles that had been written by researchers at prestigious institutions and recently published in prestigious peer-reviewed psychology journals with non-blind refereeing practices, and simply re-submitted these articles to the same journals that had published them. The only change they made to the articles was to substitute fictitious authors’ names and institutions for the real ones. Only three of the resubmissions were detected; the remaining nine were reviewed and rejected, usually for ‘serious methodological flaws’. This seems to suggest that there’s some value in blind refereeing after all.
Responding to Armstrong, Miser (1998) points out a crucial problem with all attempts to critique peer review: there is no explicit model of the editorial process.
sometimes it appears that the editor of a journal is in charge, sometimes as though he were taking orders from his referees and associate editors and just publishing what they tell him to.
Since nobody has produced a scientific account of how editorial processes really work, all discussions of peer review are based on assumptions about those processes rather than real knowledge.
The proceedings of the recent International Symposium on Peer Reviewing are now available, but I haven’t looked at them yet.