Free Our Books

because books want freedom, too

Science is already a wiki, just a really inefficient one

with 13 comments

In another excellent post, Publishing science on the web, John Wilbanks reacts to discussions he had with British Library staff and leaders, who told him that:

Publishers frequently claim four functions: registration (when was an idea stated?), certification (is the idea original, has it been “proved” to satisfactory peer review?), dissemination (delivery), and preservation of the record. The journal thus provides for both the claiming of ideas by scientists and for the “memory” of the sciences.

He responds by saying that the Web already does a lot of this, outside of journals and existing scientific and publishing mechanisms: “Wikis and blogs provide almost costless registration and dissemination of new scientific communication.” However, resistance to integrating the Web into science is strong. Referring to science as an inefficient wiki, he states that, in the current model of scientific production, “the incremental edits are made in papers instead of wikispace, and significant effort is expended to recapitulate the existing knowledge in a paper in order to support the one-to-three new assertions made in any one paper”. Another major problem he underlines, one that I find hugely problematic in my own work, is that each discipline has its own highly specialist language through which it operates. This is a problem for creating scientific mashups:

Right now the problem is we still think about cross disciplinarity as a function of people choosing to work together. But the internet and the Web give us a different model. What’s more cross disciplinary than Google?

Although we use Google daily in our research, and it leads us across scientific fields, the obstacles remain both the language barrier between fields and the “lack of knowledge interoperability at the machine level”.

Written by KontraMraku

6 August 2009 at 12:27

Posted in Quick Research

13 Responses

  1. The web doesn’t actually do any of that. There are no mechanisms on the web (other than scientific publishers themselves) for verifying when something was first published on line. Nor does the web provide peer review. A wiki is by definition not peer review; it’s review by any idiot who happens to be passing by and feels like posting on the wiki. And wikis and blogs are not long-term archives; they can disappear at any time.

    As for scientific jargon, try to imagine what would happen if physicists had to write their articles in a language understandable by anyone. First of all, they’d have to get rid of all the mathematical symbols, because most people don’t understand them. Physics would be set back a few thousand years as physicists tried to explain everything in words.

    Benjamin Geer

    6 August 2009 at 15:03

  2. You give priority to an organization which prints a date onto a piece of paper over an electronic stamp in blog posts. Not only can I show when something was published on a WordPress blog, with the right plugins installed (I'm not sure whether we have those features here on a WordPress-hosted blog), I can show the differences between each version too. You know all this. It's just that you assign more trust and authority to an organization that prints on paper, with some known organizational and procedural structures, than to someone using server time to leave timestamps on each edit.

    My view is that we need to merge the two. Blogs, as publishing mediums, can also become part of a more defined organizational and procedural model, applying some aspects of journals. I'm looking forward to that happening, and I won't sit around waiting for others to do it. Your comment actually gave me concrete ideas about what to do – always good to hear your blasts 🙂

    You conflate three concepts of a peer. The first, used in academia and politics, is a person whose authority has been assigned and certified to perform some functions in society (it takes a PhD to become an academic peer; it takes election or nomination, or both, to be a peer in politics, depending on the detailed implementation).

    In the second, in networking, a peer is anyone who accepts the protocols that other peers use and takes part in exchanges. The goal of peer networking can be communication, co-operative production, or the exchange of existing entities (documents, files).

    In the third model, the Web model (I'm working on developing it), which is the one I'm most interested in, a peer is so loosely defined that it only vaguely resembles the two models above. For example, I would see all contributors to my research paper as peers, as long as they contribute in ways that I find productive: improving the work; exposing its weak sides; improving or disproving the main arguments made in the paper; adding or pointing to relevant references. As the primary author of a research paper, I reserve the right to decide who satisfies such criteria, and why, and to change the criteria. The criteria should be known explicitly, although in academia, from the point of view of the author, we do roughly have a shared understanding of what a useful contribution is. When blogs become more like journals, a collective of authors and/or editors will be making such decisions. I can see the possibilities clearly, and I'm looking forward to experimenting with various models.

    Libraries burn too. And it's easier to back up and restore an electronic archive than to reprint an entire library that burns down. Yes, we're better at protection against fire, so that doesn't happen often any more. We are getting better at electronic backup and restore too. Some argue that we're still nowhere near the quality of physical preservation, which is why many libraries still buy microfilms – there was recently a good blog post by a librarian on this. I can't see any intrinsic qualities of paper or of electrons. They're both part of the material world that I can use for the same purpose. They do differ, and I agree that paper currently has an advantage in terms of long-term preservation. Although some people I spoke to who work in this area disagree with me, and give primacy to electronic preservation. I don't really know about this one; I don't know enough to judge it. But I'm happy we have options other than paper. It takes less space, and is easier to copy and share. Far easier to expose for collaboration than paper.

    On the last point, jargon: I was thinking of the social sciences and humanities. They need far less jargon than they use these days – that's my experience of reading across several disciplines. To express complex ideas, we might at times need complex language. But on many occasions, we are trained to think in ways which are field-specific. Or we just don't think very clearly. That is part of becoming a type-1, certified, peer.

    Toni Prug

    6 August 2009 at 15:57
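    The timestamped, diffable edit record described in the comment above can be sketched in a few lines. This is a minimal illustration using only Python's standard library, not WordPress's actual revision mechanism; the function and variable names are made up for the example:

    ```python
    # Sketch of a revision log: each edit is stored with a server-side
    # timestamp and a content hash, and any two versions can be diffed.
    import difflib
    import hashlib
    from datetime import datetime, timezone

    revisions = []  # list of (timestamp, sha256 digest, text)

    def record_edit(text):
        """Append a new version with a UTC timestamp and a content hash."""
        stamp = datetime.now(timezone.utc).isoformat()
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        revisions.append((stamp, digest, text))

    def diff_versions(i, j):
        """Unified diff between revision i and revision j."""
        return "\n".join(difflib.unified_diff(
            revisions[i][2].splitlines(),
            revisions[j][2].splitlines(),
            lineterm=""))

    record_edit("Wikis and blogs provide registration of new work.")
    record_edit("Wikis and blogs provide almost costless registration of new work.")
    print(diff_versions(0, 1))
    ```

    The hash makes each recorded version tamper-evident after the fact, and the diff shows exactly what changed between edits – which is the kind of per-edit record the comment contrasts with a printed publication date.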

  3. It’s just that you assign more trust and authority to an organization that prints on paper, with some known organizational and procedural structures, than to someone using server time to leave timestamps on each edit.

    Exactly. Individuals can fake timestamps, or forget to pay hosting bills. The only way to prevent that is to create institutions, like universities that maintain repositories. My point was just that you can’t simply trust ‘the web’ to take care of these things spontaneously. Currently, publishers are institutions that perform these tasks; if you want to get rid of publishers, you need some other institutions (and not ‘the web’) to perform the same tasks.

    As the primary author of a research paper, I reserve the right to decide who satisfies such criteria, and why.

    That won’t work. Suppose you’re looking for an authoritative article in a field that you don’t know anything about. How will you select one? Currently you can go to a prestigious journal, and you can feel confident that the articles published there have been approved by competent reviewers. Suppose we got rid of all the journals, and everyone just posted their own articles on their own wiki, and let their friends comment. How would you — an outsider — tell the difference between the competent authors (and reviewers) and the charlatans?

    On the last point, jargon: I was thinking of the social sciences and humanities. They need far less jargon than they use these days

    The problem isn’t jargon, it’s the concepts themselves. In order to understand current research in physics, medicine, sociology, philosophy, or any other discipline, you have to know the history of the field and the theories that are currently being debated.

    Take nationalism studies, for example. Benedict Anderson’s theory relies on a concept he calls ‘print-capitalism’. This particular piece of jargon is a convenient name for a particular set of complex ideas. Anderson used this term because he needed a concise way to refer to that big chunk of ideas, and it’s difficult for me to see how we could get rid of this term when discussing his theory. The problem for outsiders isn’t the term ‘print-capitalism’ in itself; it’s the need to understand the theory. And of course, in order to understand the word ‘capitalism’, you need to know something about Marxism. Thus any article in any field presumes knowledge of the whole history of the field.

    For me, peer review is review by people who know this history. Those are the only people who are in a position to judge current work in the field.

    Benjamin Geer

    6 August 2009 at 16:35

    • Any group of humans acting together that can be held accountable for its acts can perform elements of what institutions do. Since the mechanisms and technologies for acting together and being held accountable are changing, so will our notions of what institutions are. It will take time. But on-line collaboration will get institutionalized. It already has been. Isn't Mozilla one such form? A combination of employees, direct managerial control, and thousands of contributors, acting together. The Mozilla Foundation and Mozilla Corporation are institutionalized in the classical sense, but they are part of a hybrid model that relies on volunteers and peers chosen not by line managers, but by type-3 peers themselves. I'm looking forward to some of this penetrating academia. I think it's starting to happen through blogs. In the areas in which I work, the current sources of authority are not a very good guarantee of quality or of peer selection. It's far better than if everything were a free-for-all. But neither are blogs a free-for-all, as I explained with academic blogs, nor am I imagining free-for-all structures when I consider Web collaborations as a new place to form institutions and to take over part of the functionality of the existing institutions.

      Toni Prug

      6 August 2009 at 17:02

      • OK, but Mozilla is not a wiki or a blog. It’s not easy to get them to accept a patch. I was reacting to this sentence that you quoted in the post above:

        Wikis and blogs provide almost costless registration and dissemination of new scientific communication.

        The implication seemed to be that journals could be replaced by wikis, and I think that’s preposterous. Yes, maybe journals could be replaced by something like the Mozilla foundation, with its strict hierarchy of ‘module owners’ and ‘peers’.

        Benjamin Geer

        6 August 2009 at 17:19

        • The IETF has concepts of protocol ownership and technical competence that provide ways to assign responsibility for projects and to assert a form of quality control. BUT, the IETF will allow anyone to say: “I'm a peer too! I'm competent enough! I accept the rules of engagement, and here's my first patch, or contribution to a protocol discussion.” Academia should do the same.

          Toni Prug

          6 August 2009 at 17:35

          • It already does. Anyone can submit an article to any peer-reviewed journal.

            Benjamin Geer

            6 August 2009 at 17:38

          • … but it doesn’t allow anyone to become a peer reviewer overnight. You have to prove yourself first…

            Benjamin Geer

            6 August 2009 at 17:40

            • … and what exactly happens inside the peer review process is not visible to anyone other than those few participants. It's a black box. If we did it the IETF way, the process would be open and far more accountable. You're right that a culture of not quoting drafts while they are within peer review would be essential – for those who care about that aspect. For some texts I don't; for others I do care, and wish to have the protection of not being quoted before I call the text done. But I can't see reasons to keep the whole process closed as it is now, especially since it doesn't produce much of what I consider good work.

              Toni Prug

              6 August 2009 at 17:48

              • The peer review process of Linux and Mozilla is transparent (things are done on public mailing lists), but only a few people have commit privileges. I don’t know exactly how the IETF works, but I suspect that they have some formal or informal way of keeping unqualified contributors from gaining authority, too.

                Benjamin Geer

                6 August 2009 at 18:15

  4. Here’s an example to illustrate the point about institutions.

    Scientific publishing giant Elsevier put out a total of six publications between 2000 and 2005 that were sponsored by unnamed pharmaceutical companies and looked like peer reviewed medical journals, but did not disclose sponsorship, the company has admitted.

    If we got rid of journals, and everyone just posted their own articles on their own wikis, I would expect this to happen constantly. Pharmaceutical companies would create thousands of wikis to publish advertisements disguised as research. Like you, they would ‘reserve the right’ to decide who could post comments on those wikis, and would pay people to post positive comments.

    ‘The web’ cannot solve this problem. The problem isn’t with the technology, it’s with the institution, i.e. Elsevier, a profit-making company. I think the only way to solve this problem is to remove the profit motive and make that institution accountable to researchers rather than to shareholders.

    Benjamin Geer

    6 August 2009 at 16:53

    • I agree with both points. What I suggested is to improve, not remove, journals, based on the available technologies and the organizational and production methods that on-line tools and practices make possible. And I didn't suggest that the Web can solve any problems on its own. But the new methods (see the IETF mission statement) and technologies (wikis, blogs, mailing lists, workflow CMS websites, shared calendars, syndication of all kinds, etc.) that have been created on, and for, the Web are very likely to change radically how we work, and hence to change both journals and institutions of all kinds.

      Profit-making has been installed at the centre of knowledge production for decades; there are far bigger problems than a few fake journals – like the official UK government policy that demands that universities be for-profit, business-minded organizations.

      Toni Prug

      6 August 2009 at 17:21

  5. Yes, for-profit universities are the same problem. Research has to be made more autonomous from economic and political power. The only way to make a field more autonomous is to make it difficult for people to enter the field (e.g. by participating in peer review) without having the necessary academic competence. If online collaboration enables ‘anyone’ to be a peer reviewer, the result will be less autonomy, not more. On the other hand, if it is structured in such a way as to enable a larger number of truly competent people to participate, while keeping out everyone else, it will increase the autonomy of the field by increasing competition among those who are qualified to compete, and thus raising standards.

    Benjamin Geer

    6 August 2009 at 17:33

