Free Our Books

because books want freedom, too

Article-level metrics

In a recent post on the Public Library of Science (PLoS) blog, PLoS Director of Publishing Mark Patterson argues that traditional journal ‘impact factor’ metrics, which are meant to translate citation statistics into a measurement of a journal’s influence and prestige, are no longer useful, because in practice, ‘readers tend to navigate directly to the articles that are relevant to them, regardless of the journal they were published in’.  Therefore, PLoS will no longer publicise impact factors for the journals it publishes, and has begun providing article-level metrics instead.

Bora Zivkovic, Online Discussion Expert for PLoS, stresses that PLoS does not intend ‘to reduce these metrics to a single number’, because different metrics will be relevant for different sorts of articles: if an article is intended mainly to be read by scientists, citation statistics will be more relevant, whereas if it is aimed at a wider audience, media/blog coverage numbers will be more relevant.
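
For illustration, here is a minimal sketch, in Python, of what such a multi-dimensional metrics record might look like. Everything in it is hypothetical: the class, field names, and DOIs are invented for this post, not taken from PLoS's actual schema or API.

    from dataclasses import dataclass

    @dataclass
    class ArticleMetrics:
        # Hypothetical article-level metrics record (illustrative only).
        doi: str
        citations: int = 0       # most relevant for articles aimed at scientists
        downloads: int = 0       # raw readership
        media_mentions: int = 0  # most relevant for wider-audience articles
        blog_posts: int = 0

        def report(self) -> dict:
            # Deliberately returns the separate dimensions, not one combined score.
            return {"citations": self.citations,
                    "downloads": self.downloads,
                    "media_mentions": self.media_mentions,
                    "blog_posts": self.blog_posts}

    # Two invented articles with different audiences show different profiles.
    lab_paper = ArticleMetrics("10.1371/example.0001", citations=42, downloads=900)
    outreach_paper = ArticleMetrics("10.1371/example.0002", downloads=15000,
                                    media_mentions=12, blog_posts=30)
    print(lab_paper.report())
    print(outreach_paper.report())

The design point of the sketch is simply that a reader of such metrics chooses the dimension that matters for a given article, rather than being handed a single pre-computed ranking.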

Responding to Patterson on the EPrints Open Access Archivangelism blog, Stevan Harnad contends that although article-level metrics are useful, it’s still important to rank journals, too, because some journals have higher standards of peer review than others.

Written by Benjamin Geer

24 July 2009 at 11:56

Posted in News

20 Responses

  1. PEER REVIEW STANDARDS

    When I wrote that the planet’s 25,000 peer-reviewed journals differ not only in their subject matter, but (within each subject area) also in their quality levels, I was not particularly referring to the journal impact factor (average citations) as the means of ranking journal quality. I was referring to the fact that authors and users — the peer community — know the quality standards of journals, based on their track-records. My point was that a generic accept/reject criterion plus multiple post-publication metrics are not a substitute for this independent variation among journals in their individual degree of selectivity and the rigor of their peer review. A rich variety of post-publication metrics is a valuable supplement to a rich variety of journals and their quality levels, based on their selectivity and peer-review standards.

    Stevan Harnad

    24 July 2009 at 19:55

    • OK, I think I don’t quite understand what you’re getting at. PLoS doesn’t seem to be suggesting that journals should no longer differ in their degree of selectivity or in the rigour of their peer review. I agree with you that it’s important to know the quality of different journals and to take this into account when evaluating the articles they’ve published. But how do we determine the quality of journals? It seems to me that people tend to rely on subjective evaluations (e.g. who is on the editorial board of a given journal, and which journals publish articles by the scholars that I have the greatest respect for). Although I’m not convinced that journal impact factors as currently implemented are terribly meaningful, I think that in principle it’s a good idea to try to produce a more objective measurement of the quality standards of journals, and that more research ought to be done on this. Is that something like what you were getting at?

      Benjamin Geer

      24 July 2009 at 20:44

      • REGRESSION TO THE MEAN

        First, I think there is an idea afoot that peer review is just some sort of generic pass/fail grade for “publishability,” and that the rest is a matter of post-publication evaluation. I think this is incorrect, and represents a misunderstanding of the actual function that peer review is currently performing. It is not a 0/1, publishable/unpublishable threshold. There are many different quality levels, and they get more exacting and selective in the higher-quality journals (which also have higher-quality and more exacting referees and refereeing). Users need these quality tags when they are trying to decide whether newly published work is worth taking the time to read, and worth the effort and risk of trying to build upon (at the quality level of their own work).

        I think both authors and users have a good idea of the quality levels of the journals in their fields — not from the journals’ impact factors, but from their content, and their track-records for content. As users, researchers read articles in their journals; as authors they write for those journals, and revise for their referees; and as referees they referee for them. They know that all journals are not equal, and that “peer review” can be done at a whole range of quality levels.

        Now you ask whether there is any substitute for this direct experience with journals (as users, authors and referees) in order to know what their peer-reviewing standards and quality level are: My reply is that there is nothing yet, and there may never be anything as accurate as having read, written and refereed for them. Metrics might eventually provide an approximation, though we don’t yet know how close, and of course they only come after publication (well after). The track record is far from infallible either, however; occasionally the usually-higher-quality journals will publish a lower-quality article, and vice versa. But on average, the quality of the current articles will correlate well with the quality of past articles. Whether judgements of quality from direct experience (as user/author/referee) will ever be matched or beaten by multiple metrics, I cannot say, but I am pretty sure they are not matched or beaten by the journal impact factor.

        And even if multiple metrics do become as good a joint predictor of journal article quality as user experience, it does not follow that peer review can then be reduced to generic pass/fail, with the rest sorted by metrics, because (1) metrics are journal-level, not article-level (though they can also be author-level) and, more important still, (2) if journal differences are flattened to generic peer review, entrusting the rest to metrics, then the quality of the articles themselves will fall, for rigorous peer review does not just assign articles a differential grade (via the journal’s name and track-record): it improves them, through revision and re-refereeing. More generic 0/1 peer review, with less individual quality variation among journals, would just generate quality regression to the mean.

        Stevan Harnad

        24 July 2009 at 21:50

  2. Everything you’ve said makes sense to me, except that I don’t understand what it would mean in practice for peer review to be ‘reduced to generic pass/fail’ or for journal differences to be ‘flattened to generic peer review, entrusting the rest to metrics’. I haven’t heard anyone in academia say that peer review is a generic pass/fail grade for publishability or that it doesn’t matter which journal you publish in. On the contrary, as you say, we all know that some journals have higher standards than others, and experience gives us a good sense of the quality levels of the different journals in our own fields. Do you think there’s a real risk that we might lose that awareness, and if so, how could that happen?

    Benjamin Geer

    24 July 2009 at 22:05

    • Before I reply, let me give a bit of background:

      (1) I edited a peer-reviewed journal (with a very high “impact factor”) for nearly a quarter century (BBS).

      (2) Though it’s not my day-job, I am now an Open Access “Archivangelist”.

      (3) And although it certainly is not my view, there are some in the Open Access (OA) movement who think that OA will make it no longer necessary for peer review to be so exacting and selective; the open research community will be able to do, after publication, part of what peer-review had formerly had to do before (and as a precondition for) publication. The following quote is from the article under discussion, by Mark Patterson, publishing director of PLoS, which publishes the best and most important OA journals:

      “[J]udgements about impact and relevance can be left almost entirely to the period after publication. By peer-reviewing submissions purely for scientific rigour, ethical conduct and proper reporting before publication, articles can be assessed and published rapidly. Once articles have joined the published literature, the impact and relevance of the article can then be determined on the basis of the activity of the research community as a whole.”

      Now PLoS publishes excellent journals, some as selective and exacting in their peer-review standards as Science and Nature (and with an equally high impact factor).

      But most journals (whether OA or non-OA) are not that selective or exacting (although the top journals in each field probably are). There are also a lot of mediocre-quality journals, and quite a few frankly bad ones, especially those published by big “journal fleet” publishers, who seem to emphasize quantity over quality. And alas OA has attracted some bottom-feeding fleet-journal wannabes too, eager to reap some of the high profits of the fleet publishers by cashing in on “author pays” OA journal publishing.

      And the way they do it is by dumbing down peer review (to drum up publication volume) while playing up OA. They offer authors quick “peer-reviewed publication” (at a price) without the OA publishers themselves having to worry much about quality standards.

      And they play up metrics, because they know that one of the things authors like and seek is journals with high citation impact. The bottom-feeders use the evidence that OA enhances citations to mask the fact that that effect only works for papers of quality. Low-quality work will not get more cited no matter how open you make it.

      And the association of OA with the dumbing down of peer review is not doing anyone any good: not the OA movement, not research progress, and not research impact.

      Whether they are OA or non-OA, journals need to earn their status in a journal quality-hierarchy by practicing rigorous, selective peer review. The argument that peer-review and publication can now be much faster and less exacting, because OA and metrics will take care of the rest, is a specious one.

      Stevan Harnad

      25 July 2009 at 00:08

      • summary: PROBLEM = CLOSED PRODUCTION PROCESS. Opening up only the final result (OA) is not good enough, at least not in the social sciences.

        Hi Stevan, both Ben and I are PhD students, albeit in our late 30s, both with many years of experience in Free Software, Open Source, and corporate software and network engineering. Here’s what’s puzzling me about peer review. Open collaboration in those software communities has proven to be a huge success in the production both of knowledge (protocols) and of highly useful and productive objects (software). While I can see your arguments about how OA enabled the (author-pays) proliferation of mediocre work, I have a huge problem with the way blind and closed peer review works.

        Coming from open collaboration in software, it is unacceptable to me not to know who is judging my work (blind peer review). It is also unacceptable that peer-review comments are not publicly available – why shouldn’t other knowledge workers learn from an open peer-review process? And why can’t others judge the quality of peer review in public? Why not even comment on peer reviews, as we do on any blog today? As we do with software. What are your arguments against this, if any?

        I’m not against peer review; I’m against closed, anonymous peer review. Coming to academia from software production and online political activism, in the world of communication and cooperation tools we now live in, the academic peer-review process seems like a stone-age operation to me – entirely inefficient, slow, and ridden with personal vendettas and traumas for all sides involved. Why not do it all in public, as we do in software? I’m speaking from the perspective of the social sciences.

        Another example: in my first year at the LSE (PhD, Sociology) we were told that when one writes a very original, ground-breaking work, it is incredibly hard to get it published, either as a book or as a piece in a journal – because editors and peer reviewers have little to compare it with and little to sell it along with (books need to be sold alongside other books, i.e. their relation to other works has to be understood by sellers and buyers). Hence, the advice we were given was to produce work that moves forward in small incremental steps, staying close to the existing and recognizable body of knowledge. At the time, I was stunned by the advice – where would we be today if Tesla or Edison had been told the same thing as students and had followed it! Later I learned that they had simply told us the truth about getting published in the social sciences.

        Now, if the entire process — from submission, to editorial review, peer selection, peer review, revisions, and finally the publish-or-reject decision — were done openly, as in software, I think that the madly counter-productive, yet truthful, advice I was given in my first year would no longer make sense.

        At least when it comes to the social sciences, I will argue (and I’ll blog about it here too) that such radical openness of the whole process will improve the quality of peer reviewing and of the work produced, and will increase the chances of ground-breaking work getting the attention it deserves.

        Finally, as for your example of dumbed-down peer review: in the open system I’m proposing, such reviews would be widely exposed and recognized for what they are.

        In short, OA is a small but important step. What I really want to see, and what I believe will significantly improve the social sciences, is the opening up of as much of the process of knowledge production as possible. Sure, the argument stands that many authors will not want their very negative peer reviews and rejections out in the open.

        But then, if it were all in the open, perhaps authors would pay more attention to what they write and what they submit. And all I hear from friends who sit on editorial boards are horror stories about dealing with a constant incoming mountain of horrible work.

        Let’s have it all in the open, and authors will start paying attention to what they write.

        Oh, by the way, I think that the quality of work in the social sciences is close to a total disaster (especially the ratio of socially invested resources to what society gets back in return). As an undergrad a few years ago, I was puzzled about why this was the case. Now I think I know where the problem lies: the closed production process.

        Toni Prug

        25 July 2009 at 01:52

  3. The issue of peer review reform has been much discussed: http://bit.ly/iEPBI

    All I can add is that suggestions like yours need to be tested to see whether they (1) work (i.e., result in articles that are at least of the same quality as those produced by classical peer review), and, if so, whether they are (2) sustainable and (3) scalable.

    My own guess is that most of your suggestions (which have been made many times, by the way) would not succeed on any of these counts, for reasons that are discussed in the link above: authors don’t want to make unrefereed papers public; qualified referees are rare and overused. Nor is peer review closed: all identities are known and answerable to the editor, who is in turn answerable to the readership. And junior referees will not criticize senior authors openly, for good reasons, etc.

    I suggest either gaining more direct experience in authoring and refereeing (perhaps even editing) refereed journal articles, or devoting the years it would take to test your hypotheses: writing scholarly and scientific research articles is not like writing open-source or collaborative software.

    Stevan Harnad

    25 July 2009 at 02:50

    • Stevan, we are in full agreement on the need for mandating self-archiving. That’s the direct goal of the Free Our Books campaign, extended to books too – since books matter more in social-science research assessments and, in general, in real-terms impact (although many argue that this is changing, with journals rapidly gaining importance).

      However, your strategy of separating peer-review reform from OA — and I can see from reading the archives of the mailing list you linked to that you’ve been arguing for this for a decade or even longer — makes full sense if the existing system works as well as you’ve been saying. I trust your assessment, since I know little about the natural sciences. From that position, it also makes full sense that you don’t want OA delayed or distracted by possible peer-review reforms. I can see all the arguments for those points clearly.

      However, in the social sciences the starting position is different. Let me repeat: there is an enormous lack of quality in social-science publishing, in both books and journals. This lack of quality is my starting point in assessing whether peer reviewing works or not.

      However, I don’t think it’s a good use of my time to have lengthy arguments with people who disagree with my assessment of the quality of the social sciences (some of that is necessary, but only in a limited way). Instead, as you suggested, I’d rather work on showing that an open peer-reviewing process can deliver better results in the social sciences – either by convincing existing journals that I like to adopt it (or at least to experiment with it), or by starting a new journal built around open processes.

      Since quality is unlikely to be equal across all disciplines of scientific knowledge production, I think that, at minimum, your arguments should allow space for other disciplines to pursue different strategies. Unless you’re willing to argue that you have insight into all academic disciplines and the quality of their outputs, which I doubt you are.

      I agree with you that I should test my ideas in an actual, live process – that’s how the idea started (I’m working with journals to convince them to adopt new, open peer reviewing). I also agree (an argument you made to others on the mailing list) that I should test my argument about the lack of quality of social-science research outputs in existing journals. However, I could end up in a catch-22 scenario here: peer reviewers might not like seeing a paper arguing that their model is ineffective. If that turns out to be the case, my only avenues are (a) to publish the paper and those peer reviews publicly and invite colleagues to assess them on their own, and (b) to demonstrate in practice, by implementing open peer-reviewing models and showing that the research outputs are better, that they are the way to go in the social sciences.

      In short, by proposing blanket solutions, you’re making it harder for those of us in other disciplines to present our own arguments, based on our assessments of the existing models and of existing research-output quality.

      Hence, I hope you’re ready to question your position and to limit your arguments to the fields you know well. Unless you’re ready to take on the role of judge of the quality of all scientific disciplines.

      Toni Prug

      25 July 2009 at 13:23

      • Toni, a few points:

        (1) I am myself partly in the social sciences (psychology).

        (2) I agree that research quality is not high (in the fields I know).

        (3) But I doubt that that low quality is because of low peer review standards: It’s much more likely to be because of a high quantity of low quality research.

        (4) Absolutely everything is getting published, somewhere or other, so it is certainly not that the high quality research is being suppressed.

        (5) With OA, research will have a much better chance of making the impact it deserves, even if it happens to appear in a lower quality journal than it deserves.

        (6) Books are a more complicated matter than journal articles, because they are not all (nor even mostly) author give-aways (yet), as articles are, written solely for uptake and impact, rather than out of some hope of royalties.

        (7) But this may change now, partly because of the growth in OA to journal articles and the growing evidence of the enhanced uptake and impact that OA generates, and partly because there will soon be book impact metrics too, starting with book citation impact. (An international team from the US, UK and Canada [Giles, Penn State; Carr, Southampton; and my group, UQAM] has just submitted to the Digging Into Data Challenge a proposal to create a book impact index for all scholarly and scientific disciplines, but especially the book-intensive ones in the Humanities and Social Sciences.)

        (8) I am definitely against the promotion and implementation of untested alternatives to peer review — but I’m all for testing them.

        (9) My focus, however, is on promoting and facilitating OA mandates (with repository software, impact metrics, and policy guidance).

        (10) Judging by the slow rate of progress, despite all my years of effort, on my main objective (Green OA self-archiving), I wouldn’t worry too much, if I were you, about the deterrent effects of my view that peer-review reforms need to be tested before being implemented and promoted…

        Stevan Harnad

        25 July 2009 at 18:18

  4. Stevan, it seems to me that the problem you describe, of journals that apply low quality standards because they can generate revenue by doing so, could be eliminated by eliminating the need for journals to generate their own revenue. Whether a journal depends on generating revenue from subscriptions or from author fees, there will always be a temptation to dumb down the content, either to sell subscriptions to a wider audience, to attract more fees from authors, or even to bring in corporate sponsorship (as in the case of the fake journals that Elsevier published for pharmaceutical companies).

    The only way to ensure that financial interest will not interfere with journal quality is to remove financial interest from journal publishing, and it seems to me that OA is a necessary (but not sufficient) condition for this. A recent survey of open-access journals published using the OJS software found that most of the journals ‘reported small (or zero) expenses and revenues’. Moreover, in most of them, ‘the editor is personally responsible for copy editing, layout, and proofreading’, yet ‘editing requires less than ten hours per week’. If a journal can indeed be run on a very low budget, employing only a single part-time editor, it should be a relatively simple matter to eliminate both subscription fees and author fees, e.g. by funding journals out of university budgets. Indeed, in this survey, ‘more than half the journals were sponsored by academic departments’. To me, that looks like a good way to eliminate the financial interests that can damage the quality of journals.

    Benjamin Geer

    25 July 2009 at 09:45

  5. There are about 25,000 peer-reviewed journals, across all fields, publishing 2.5 million articles a year today. The scaled-down shoestring budget you describe is closer to the truth of what their minimal expenses could be reduced to, but it is still an exaggeration. Even for the real costs, there is the question of how to get there from here. There is no point in just imagining that it would be possible; a transition scenario is needed. Otherwise, as with untested speculations about peer-review reform, one is just conducting an armchair exercise.

    What is needed is a viable transition scenario; OA is part of it: http://www.nature.com/nature/debates/e-access/Articles/harnad.html#B1

    My own interest, however, is not in publishing economics, but in OA itself. And for that, it is also clear that what is needed is OA self-archiving mandates by institutions and funders. So that is all that I am working for.

    Yes, there are junk journals no matter who is paying — user-institutions for journal subscriptions or author-institutions for article publication. Subscription-based junk journals have never gone as low in their standards as the new spate of author-pays OA journals, but I don’t pay much attention to that either, since the problem is not starting up new journals or journal fleets (we have more than enough already), but getting the 2.5 million annual articles in the existing ones to be made OA. And the mandates will do that.

    After that, the future of publishing and of peer review can and will take care of itself. But before that, speculations about alternatives to peer review or alternative cost-recovery models for publishing are simply distractions from what really needs to be done: universal OA self-archiving mandates adopted by institutions and funders.

    Stevan Harnad

    25 July 2009 at 10:05

  6. Stevan, your proposed transition scenario looks reasonable to me, except for the idea of self-archiving preprints. As you pointed out in your reply to Toni above, authors don’t want to make unrefereed papers public, and I think there are good reasons for this.

    I’ve recently had an article accepted by a peer-reviewed journal, and the version they finally accepted is very different from (and, I think, much better than) the version I initially submitted, to the point that I wouldn’t want anyone to cite the original draft, so I’m glad I didn’t self-archive that original draft. After the article was accepted, the editor copyedited it, and I reworded some parts as a result; thus the article was improved yet again. The final copyediting won’t take place until shortly before the article is actually published, so I’m going to wait to have the definitive version before self-archiving it. If people cite the article, I want them to cite the published version, particularly if they’re going to criticise it; if people were arguing over different versions of the same article, the result could only be confusion.

    In my view, this means that journals have to allow immediate self-archiving of the published versions of articles. How can they be made to accept this? (Strangely, the journal that’s publishing my article allows me to self-archive the published version on my ‘personal or departmental web page’ as soon as it’s published, but requires me to wait a year before putting it in an institutional or subject repository.)

    Benjamin Geer

    25 July 2009 at 10:50

  7. Unfortunately, you have misunderstood:

    (1) The Green OA self-archiving mandates by institutions and funders require deposit of the refereed, revised, accepted final draft (the “postprint”) — definitely not the unrefereed preprint, which is merely optional: http://openaccess.eprints.org/index.php?/archives/494-guid.html

    (2) 97% of journals already endorse immediate Green OA self-archiving, 64% for the postprint: http://romeo.eprints.org/stats.php

    (3) All postprints can be immediately deposited, regardless of whether the publisher endorses immediate postprint OA. The repository’s “email eprint request” Button can tide over user needs with “Almost OA” for Closed Access deposits during any publisher embargo:
    http://openaccess.eprints.org/index.php?/archives/274-guid.html

    (4) Hence the retardant is neither publishers nor preprints. It is just the slowness of the research community in self-archiving spontaneously, and the slowness of their institutions and funders in mandating it.

    Stevan Harnad

    25 July 2009 at 12:00

  8. OK, what confused me was your statement: ‘Nor are copyright restrictions an obstacle to self-archiving: preprints can be self-archived without any restriction at the time the paper is submitted to a journal.’

    However, I also see that the Immediate-Deposit/Optional-Access (ID/OA) Mandate calls for self-archiving ‘the author’s final, peer-reviewed draft’, and I’m still opposed to that, because in reality it’s not the final draft, i.e. it’s not exactly the same as the text that will be published after copyediting (which can involve numerous small changes).

    If I insist on self-archiving only the published version, my only option under an ID/OA mandate would be to ‘provisionally set to Closed Access (with only the metadata, but not the full-text, accessible webwide)’, with an embargo of one year, since that’s what the publisher imposes. In my view, that’s not OA. In fact, I’m not prepared to use the term Open Access to refer to anything except OA to the published version as soon as it’s published. In my view, the ‘pale-green’ journals at http://romeo.eprints.org/stats.php should not be called green at all.

    Nor am I satisfied with your suggestion of an ‘email eprint request’ button. Some academics are extremely slow to respond to emails; by the time they reply, the deadline for which you needed their article has passed. ‘Almost OA’ is not OA.

    Therefore my question remains: how can the 36% of journals that don’t allow immediate self-archiving of postprints be persuaded to change their policy?

    Benjamin Geer

    26 July 2009 at 02:48

  9. “I’m not prepared to use the term Open Access to refer to anything except OA to the published version as soon as it’s published… ‘Almost OA’ is not OA.”

    Agreed.

    The purpose of the ID/OA (Immediate-Deposit/OPTIONAL-Access) mandate is to ensure that immediate deposit is required for all articles, whether or not the deposit is made immediately OA. The alternative has been (and alas still is) mandating deposit only after the publisher embargo has elapsed, or allowing authors to opt out where there are publisher complications.

    ID/OA allows the “email eprint request” Button to provide “Almost OA” for users when the alternative would have provided no access at all during the embargo: http://openaccess.eprints.org/index.php?/archives/494-guid.html

    You need to bear in mind the current status quo, and user needs (especially researcher-user needs) during the embargo, before taking too uncompromising a stance on strategy, especially since the answer to your question —

    “how can the 36% of journals that don’t allow immediate self-archiving of postprints be persuaded to change their policy?”

    — is that it is the author and user pressure from universal ID/OA mandates (64% OA + 36% Almost-OA) that will cause the natural and well-deserved death of publisher embargoes, whereas embargoed-access mandates (or no mandates at all, because of bad institutional legal advice on the perils of the dissenting 36%) will not.

    “I’m… opposed to… self-archiving ‘the author’s final, peer-reviewed draft’… because in reality it’s not the final draft, i.e. it’s not exactly the same as the text that will be published after copyediting…”

    So all those would-be users who lack access today should stay that way, because you are opposed to the self-archiving of refereed but non-copy-edited drafts? I’m not sure those would-be users would agree with you. (And I’m not sure there’s all that much copy-editing being done with journal articles these days anyway…)

    Stevan Harnad

    26 July 2009 at 03:56

  10. The article I’ve been referring to was heavily copyedited. For example, the editor removed all the hedges (‘perhaps’, ‘appears’, ‘it seems likely that’, etc.), and as a result I had to add more specific information in order not to overstate my certainty about something. As a result, someone citing the ‘final refereed version’ might very well be citing a text that’s significantly different from (and perhaps significantly inferior to) the published version. You and I agreed earlier that reduced quality shouldn’t be a side effect of OA; isn’t this an example of exactly that problem?

    Let me turn your question around: all those would-be users who lack access today should stay that way, because many journals are opposed to immediate self-archiving of postprints? Instead of accepting their terms, I think we should be putting pressure on them to change, but I don’t see how ID/OA will have that effect. I personally have put no pressure on any particular journal, nor do I intend to, because my bargaining position as an individual author is too weak. I think the threat of mass migration of authors from closed-access to Gold OA journals would be a more effective means of persuasion.

    Benjamin Geer

    26 July 2009 at 04:22

  11. (1) To determine whether your own personal experience with one article is representative of the extent and substance of copy editing for journal articles today (in your field, or across all fields) requires a lot more data and research than what you have described.

    (2) The issue is moot in any case, because the author can always update his final, accepted, refereed draft (“postprint”) to include any further substantive changes (including corrections of errors that have not been caught by either the referees or the copy-editor), indicating by footnote or format which changes are actually post-publication corrigenda.

    (3) You speak of the alternative of “putting pressure on [journals] to change.” But that pressure is exactly what ID/OA mandates, the “Almost OA Button,” and the immediate growth potential of OA (64% OA + 36% “Almost OA”) itself are jointly exerting, whereas you have not given any realistic indication of what alternative form of pressure you have in mind, nor evidence that it will actually be exerted, nor that it will work.

    (4) Most important, you unfortunately appear to be unaware that, far from being “a more effective means of persuasion,” the “threat of mass migration of authors from closed-access to Gold OA journals” has already been tried, famously, and has failed, resoundingly (and, I might add, predictably). I have even given the phenomenon a name: the “keystroke koan”:

    “Why did 34,000 researchers sign a threat in 2000 to boycott their journals unless those journals agreed to provide open access to their articles – when the researchers themselves could provide open access (OA) to their own articles by self-archiving them on their own institutional websites?”

    (5) It is in fact the absence of substantive pressure from researchers themselves — the ultimate co-beneficiaries of OA — that made it clear that “keystroke” mandates were needed from their institutions and funders. The requisite keystrokes are not those of the empty threat by authors to boycott the planet’s 25K journals (with no credible alternative), but the few keystrokes it takes to actually deposit their articles in their Institutional Repositories to make them OA (or Almost-OA) — as only 15% of them are doing spontaneously today, unmandated, whereas over 90% of authors surveyed across all disciplines have said they will do (and actual outcome studies have confirmed that they actually do do) if (and only if) those keystrokes are mandated by their institutions and/or funders.

    Now Benjamin, I hope you will understand that I cannot continue this blog discussion with you, because (as I also hope is evident by now), all I am really doing is rehearsing the history of the OA movement for you: what has already been thought, said, and done, false starts and all, and the reason it has taken the direction it has taken. My writing out, yet again, what has already been written out so many times before is alas neither an effective route to global progress nor a fair substitute for your reading what has already been written directly for yourself. I have now given you a taste for what has happened and why. If your interest is serious (and I have no doubt that it is), and you wish to go beyond personal anecdotal experience and conjectures, I encourage you to consult more samples from the AmSci Forum (which chronicles it all, since 1998), or the many articles I and others (especially Peter Suber) have written about OA and related matters. The links are all there on my home page.

    Stevan Harnad

    26 July 2009 at 11:57

    • This discussion has been a significant step up for me in learning the history of OA and the arguments already rehearsed. I’ll take your advice, Stevan – not to worry too much about your differing views on peer review and changes to the publishing process – as you will see from the coming post. Thanks for sharing all this with us. I’m working on an OA mandate at my department (Queen Mary, Business School). The first thing is to get repository hosting. I emailed info@services.eprints.org (as stated at http://www.eprints.org/services/sales/ ) several days ago, asking for costs and options, but no response so far. I’ll phone them on Monday.

      Toni Prug

      26 July 2009 at 15:30

  12. Thanks for taking the time to discuss all this with us. I’ll keep reading as you suggest.

    Benjamin Geer

    26 July 2009 at 15:18

  13. […] is a great new blog Free our Books and I have linked to an entry that has generated a wealth of interesting comments on peer […]

