Open access will remain a half-revolution

The world will surely get open access but the research community will probably fail to resolve the affordability problem that led many to join the OA movement in the first place, claims Richard Poynder in conversation with Michał Starczewski. Publishers appear to be the only “stakeholders” who are relatively organised and coherent about open access, and it is to them that the paymasters are turning. OA advocates have failed to take responsibility for the movement. Furthermore, there does not appear to be much appetite in the research community for giving up publishing in prestigious journals, or for abandoning the notorious Impact Factor. Essentially, therefore, the focus needs to shift from, “How can we force researchers to embrace OA?” to, “How can we use the network to create a more effective and efficient scholarly communication system, one fit for the 21st Century?”

Richard Poynder is an independent journalist and blogger specialising in information technology, scholarly communication, professional online database services, open science, e-Science, and intellectual property. Richard takes a particular interest in the Open Access movement, whose development he has been following for more than a decade.


Michał Starczewski: Do you think that openness is already a new standard in the world of scholarly communication, or is it still an ongoing experiment?

Richard Poynder: Well, openness is certainly fast becoming a new standard in scholarly communication. What we don’t yet know, however, is exactly what openness means (or should mean) in this context, and exactly what processes and outputs it should apply to (and to what degree). We also don’t know who should best fund it, provide it, and manage it.


The OA movement is more than 20 years old. What surprised you most during this period?

What has surprised me most is the OA movement’s lack of organisation, and of any clear strategy for making OA a reality. As a consequence, we are now some 15 years on from the Budapest Open Access Initiative (where the term open access was adopted), and much still remains to be achieved, not least clarifying the issues listed in my last answer. Apart from anything else, we still have no conclusive definition of open access. Given this, it is no surprise that there is a great deal of confusion about open access.

I think there are two main reasons for the failure of the OA movement to take a more structured approach. First, the research community is not actually very good at organising itself, particularly on a global scale. And it doesn’t help that researchers are increasingly incentivised to compete more than co-operate with one another.

Second, OA advocates have tended to approach open access more as if it were a religion than a pragmatic response to the possibilities the network provides to improve both the research process and scholarly communication (which should surely be the ultimate goal of open access).

These two factors have generated unhealthy schisms and disputes within the movement, with advocates spending too much time arguing over doctrine. We have also seen OA advocates become addicted to cheerleading and the shouting of slogans, which has deflected them from devoting sufficient time to developing practical strategies and tools to achieve open access. The assumption was that all that was required was to “convert” colleagues. When the movement failed to do that it began lobbying funders and institutions, demanding that researchers be compelled to embrace OA; essentially, they sought to offload the responsibility onto others.

It also has to be said that the strategies proposed and/or supported by OA advocates have often been cockeyed — not least the concept of the article-processing charge (APC). That anyone ever thought pay-to-publish was a sensible way of disseminating research is most odd. Not only is it impractical, but it has played into the hands of profit-hungry legacy publishers, and indeed any fly-by-night cowboy able to create a web site.

I have also been surprised at how disconnected OA advocates are from the views of the wider research community — a tendency exacerbated by their habit of gathering together in their echo chamber of choice (conference hall, social media etc.) where their beliefs, prejudices and misconceptions are reinforced rather than subjected to a reality check. 

The recent Berlin 12 meeting suggests that this ghettoisation is increasing. The meeting, which was entirely focused on “flipping” subscription journals to OA models, was “by invitation only”, and the organisers chose not to invite any prominent green OA advocates, presumably to avoid any dissenting voices questioning the premise of the plan (although we cannot state this as a fact, since the delegate list was secret).

All of which is to say that I have been surprised at how open access has been treated as a “cause” rather than a solution. And despite what OA advocates like to claim, the movement is not by nature democratic, but evangelical.

The French Christian philosopher Blaise Pascal said, “The heart has its reasons of which reason knows nothing”. OA advocates have sought to persuade colleagues by appealing to their hearts rather than their reason. While this approach may make sense in the context of deciding whether to believe in God (aka Pascal’s Wager), it is not very helpful when trying to persuade people of the need to change the way that research is disseminated.  

And it is my belief that this approach has not only slowed progress but is allowing legacy publishers to co-opt the movement for their own ends.


What do you think about the current role of publishers? Is there a point where researchers, librarians and publishers can meet? Is it possible to reach a compromise that would satisfy all stakeholders?

What is interesting here is that OA advocates have at least managed to persuade funders and institutions to force their colleagues down the OA road (often for reasons not directly related to open access), so it is the paymasters that are now driving events. Since this offers publishers little choice but to commit to open access, some form of OA looks sure to become the norm.

The problem is that since publishers appear to be the only “stakeholders” who are relatively organised and coherent about open access it is to them that the paymasters are turning. At the same time, publishers have frightened funders into believing that unless OA is implemented in a way that poses no threat to their profits the entire research process could be jeopardised. It is this that is allowing publishers to appropriate open access for their own ends.

Timing and costs aside, this looks set to limit the extent to which the network is leveraged to improve scholarly communication. So I would say that publishers’ current role is probably a negative one.

What makes compromise especially difficult is that OA advocates view the world in a completely binary way. They see a world populated by good guys and bad guys, with no one in between. And they have cast publishers in the role of bad guy. Much of the discussion about OA is therefore focused on demonising publishers, who have (unfairly) become scapegoats (in the literal sense) for all the ills of academia.

It is therefore hard to see any common meeting ground. Rather we are likely to see governments and funders increasingly step in and impose open access requirements on the research community. And these requirements will likely suit publishers more than they will suit the research community, not least because it will see the triumph of gold OA, and the legacy journal will be embedded in the new environment — which is not the best outcome.

Ironically, while librarians have been the most vocal supporters of OA, and the most vociferous critics of publishers, the triumph of gold OA may suit them just fine. After all, with new money being thrown at them to manage and pay for gold OA, and subscription costs set to fall as the world moves to open access, librarians will surely feel that their lives have improved. Moreover, with funders also now providing money to pay for OA, library budgets will no longer have to bear the full burden of scholarly communication.


You have published many interviews on your blog, e.g. the series “The state of open access” (http://richardpoynder.co.uk/the-state-of-open-access.html). From your point of view, what are the most important conclusions?

In the hope of not repeating myself here I would say that the main conclusion I have reached is that OA advocates tend to delude themselves about OA — for the reasons stated above but also because they have consistently underestimated the difficulties of turning the great ship of scholarly communication around.

They have also tended to ignore or deny the new challenges and problems that open access introduces. I am talking here not just about the financial costs of trying to impose OA mandates (e.g. monitoring and managing them) but the negative effects that compulsory policies will inevitably have on the morale and job satisfaction of their colleagues. Consider, for instance, that the OA policy that the Higher Education Funding Council for England (HEFCE) is introducing effectively tells researchers that unless they make their research freely available they could lose their jobs. It does not help that it is proving horrendously difficult for researchers and librarians to navigate the plethora of new rules and processes that inevitably accompany these policies.

And of course we have seen the rise of predatory publishing, which OA advocates either deny is a problem, or assert that only the naïve have fallen victim to it (and thus have only themselves to blame).

So OA advocates talk ad nauseam about the (at times questionable) benefits of OA, but say little or nothing about the costs (monetary, managerial etc.) of forcing it on their peers. They have also given little or no thought to the need to create the necessary infrastructure to implement OA policies. What infrastructure has been put in place has generally been inadequate and amateurish, and usually suffers a lingering death when funding runs out.

Indeed, most of the practical aspects of achieving OA have been an afterthought at best, with the underlying assumption that this is something others should worry about and provide. I think this is implicitly acknowledged in the recent Knowledge Exchange report “Putting Down Roots: Securing the Future of Open Access Policies”, which notes that the success of open access policies relies on many disparate non-commercial services whose funding can disappear overnight. Moreover, the report adds, these services are now having to compete with profit-rich commercial organisations determined to maintain control of the scholarly communication process.

Even more damning, OA advocates frequently fail to comply with their own standards — not least when creating institutional repositories. As the above report notes, the potential value of what non-commercial tools have been created “is undermined by the limited adoption of the underlying standards and metadata on which they rely, and is also challenged by the presence of established commercial providers which offer more robust, proprietary datasets (e.g. Elsevier’s Scopus database and Thomson Reuters Web of Science).”

A good example of the latter point is the response we have seen to the OSTP Memorandum. In order to ensure that public access to research papers subject to the Memorandum will be mediated by them, and provided on their sites (rather than by repositories outside their control), publishers quickly created CHORUS. Librarians responded with SHARE, which appears modest by comparison, will have to rely on publishers’ co-operation, and will doubtless struggle for funding.

What we see here is part and parcel of what Geoffrey Bilder likes to call “the enclosure of scholarly infrastructure”. This growing enclosure must limit what the OA movement is able to achieve, and by not having anticipated it OA advocates are all the more powerless to prevent it.

Consider also that the poster child of the green OA movement — arXiv — recently reported that it is facing substantial financial pressures. As a result, it said, it will need to “embark on a significant fund raising effort”. Strikingly, it added that it will first have to “create a compelling and coherent vision to be able to persuasively articulate our fund raising goals beyond the current sustainability plan that aims to support the baseline operation.” This comes as the service approaches its 25th Anniversary. Is it not therefore a little late coming?

And when reality intervenes OA advocates are not very good at adapting to circumstances. Rather, they tend to respond by doubling down and repeating the same old mantras, or reinventing history. They have always claimed, for instance, that (unlike the subscription model) gold OA will impose market forces and price discipline on scholarly communication, and so force down costs. We have not seen this happen, and it now seems highly unlikely that we ever will. OA advocates nevertheless continue to insist: “It will happen; it just hasn’t happened yet”. Or they change tack, insisting: “Well, we always said the main goal is access, not affordability”.

It is noteworthy that only recently (14 years after BOAI) OA advocate (and former employee of OA publisher PLOS) Cameron Neylon conceded on Twitter: “I may be completely wrong on APCs. No functional market is emerging and it might be the wrong economic model.”

What is distinctive about Neylon is that he is one of a small group of thoughtful OA advocates, and it is to his credit that he has finally acknowledged the problem. However, even if he managed to persuade his colleagues in the movement that there is a fundamental problem with the primary OA business model, and this encouraged them to take action, it would surely be a case of seeking to shut the stable door after the horse has bolted.

So my main conclusion is that OA advocates have failed to take responsibility for the movement. And as a result, publishers have repeatedly been able to outsmart them.


What differences are there in the attitudes towards OA and its implementation in poorer or richer countries? What impact does wealth have on discussions about open science?

One would certainly assume that wealth would influence attitudes and discussions about open access. One would therefore expect that the Global South would major on green OA and the North on gold OA, since there is more money sloshing around to pay for gold OA in the North, and green OA is (erroneously) viewed as costless. But I wonder if it is as simple as that. 

For instance, there is a greater focus in wealthy North America on green OA, and a greater focus in wealthy Europe on gold OA. Of course, this may be a superficial difference because green mandates can be fulfilled with gold OA. And as funding for gold OA increases we can expect even major green mandates like the NIH Public Access Policy to increasingly be fulfilled by researchers paying for gold OA — if only because it is so much easier to do so.

Of note here, Cambridge University’s Danny Kingsley has recently suggested that the HEFCE policy (widely celebrated as the quintessential green OA mandate) may turn out to be a Trojan Horse for gold OA. Kingsley reports that nearly half of the HEFCE eligible articles submitted in 2015 were published as gold OA. And of these, 74% were hybrid OA — that is, captured by legacy publishers, and at great cost to Cambridge University.

What about the Global South, where researchers don’t generally have access to funds to pay for gold OA? In theory, of course, gold OA is still possible as open access publishers operate various fee waiver schemes. But legacy publishers are less likely to offer waivers, and as Raghavendra Gadagkar explained in a 2008 letter to Nature these schemes are very problematic. So one possibility is that we could see a global divide, with the North embracing gold OA and the South green OA.

For me there is here an interesting puzzle: the OA movement was kicked into life by George Soros’ Open Society Foundations (formerly known as the Open Society Institute), which organised and funded BOAI, and donated $3 million to the cause.

OSI’s interest in open access grew from a concern about the information divide that former Soviet Bloc countries and the Global South faced, and the Budapest meeting came in the wake of the earlier (1999) OSI initiative Electronic Information for Libraries, or EIFL (formerly eIFL.net).

EIFL was a recognition of the role that libraries play in the exchange of ideas, knowledge and information, and the development of open societies. To that end, OSI invested in library development and modernisation in the post-socialist countries of Central and Eastern Europe and the former Soviet Union. It also helped them secure subscription access to large portfolios of international journals by means of national Big Deals.

So we might want to wonder how an initiative that started out with concern about restricted information flows in less prosperous countries has given rise to a pay-to-publish regime that now threatens to limit their ability to share their research with the world. The information divide OSI set out to resolve appears simply to have been turned inside out.

Again, however, the picture is more complex than it first appears, as the developing world is fast falling victim to the developed world’s obsession with publishing in prestigious journals, and embracing the same “publish or perish” culture that benefits bean counters and managers, but degrades the quality of published research. These prestigious journals, of course, invariably belong to international publishers based in the Global North. As a result, home-grown low-cost publishing solutions face a growing existential threat, as the publishing of research papers is effectively outsourced to the so-called publishing oligarchy, a development that also poses a threat to local OA distribution platforms like SciELO and Redalyc.

We saw an example of this in 2014, when the Brazilian funder CAPES announced its intention of “internationalising” 100 Brazilian journals by outsourcing them to a foreign publisher. To that end it organised a meeting to which a bunch of Brazilian editors and five large global for-profit publishers (Elsevier, Emerald, Springer, Taylor & Francis, and Wiley) were invited. Understandably, this sparked some heated local protest, not least from SciELO.

We should also note that while researchers in the Global South can currently make papers they publish in international journals freely available by means of green OA, the publishers of these journals are introducing increasingly restrictive self-archiving embargoes in the hope of emasculating green OA. In any case, as these publishers begin to flip their journals to OA, researchers will increasingly find that if they want to be published in international journals they have little choice but to embrace gold. 

Will this disenfranchise researchers in the Global South? Perhaps not. If you think about it, there are few governments that could not/would not pay legacy publishers their asking price — if they felt it was important (be it for subscription access or publishing fees). It is worth noting that when I spoke to librarians in Poland and Serbia in 2013 I was told that, due to national Big Deals, access to international research (via subscription journals) was not really a problem.

As publishers move to new-style Big Deals that combine subscriptions with APC costs, and pay-to-publish becomes the norm for international journals, it seems reasonable to assume that national governments in the Global South will find the money to pay for these new Big Deals, as they did for subscription access. And they will be able to justify doing so on the grounds that by paying to publish in international journals they can ensure that their research gets international visibility.

So while we can expect to see green OA policies continuing to be introduced in countries like Argentina, Mexico, Peru and India the accompanying repositories may increasingly be filled with papers for which a publication fee has been paid (i.e. gold OA) rather than self-archived papers. And given that many victims of predatory publishing are based in the developing world we should not doubt that at least some researchers there are able/willing to pay to publish, even if they have to meet the costs themselves. So I expect to see these countries gradually converted to national deals that buy publication rights as well as access rights, as we are seeing happening in Europe already.


Can openness improve research quality and how?

Openness certainly should improve quality — if it is done effectively. If, for instance, it includes openness of both papers and data, and if open peer review is deployed.

Open data is important if we are to hope to address the reproducibility problem, and we can expect open peer review to improve the quality of papers, since reviewers would surely be more conscientious and thorough if their names were attached to reviews.

And if OA is combined with both open data and open peer review it should make it more difficult for researchers to publish erroneous, shoddy, fraudulent and/or fake research, or indeed to operate fake review scams.

However, open peer review needs to consist of more than just publishing reviewer names alongside papers. Their reports need to be freely available too. We also need to be cautious about calling for openness without considering any potential downsides — a point made recently by Stephan Lewandowsky and Dorothy Bishop.


Do you see libre OA getting more attention from policy-makers nowadays, or do most of them settle for gratis OA instead? What, in your opinion, will be the prevailing form of OA in the future? Will there be more libre OA, or will there be a consensus that gratis OA is enough?

Libre OA does certainly seem to be getting a lot of traction, and policy-makers do currently seem tempted to try and make it the norm. However, we can also see a lot of pushback from researchers, especially in the humanities, and especially where policy makers try to insist on CC BY. If funders persist in this there is a danger that all but the most dyed-in-the-wool OA advocates will become thoroughly put off OA. This would likely further hold back progress rather than advance it.

Many researchers view demands that they embrace libre OA as an attempt by funders and institutions to appropriate their intellectual property. Of course that is not true, but in the context of the increasing proletarianisation they are experiencing it is unsurprising that researchers should be suspicious. And while using CC BY does not mean giving up one’s IP, it does mean that the world at large can profit from your work. It also means forfeiting earnings that would otherwise be available.

I realise OA advocates insist that researchers only write for impact, not for payment, but many researchers do earn royalties from their work, particularly humanists. I do not know how common this is, but I know of one researcher who has just been offered a $130,000 advance on a book. How would researchers able to command royalties like this feel if they were told that all their future work has to be available under a CC BY licence?

And this is not an issue only for humanists. Scientists tend to express outrage when they discover that OA papers they have authored are appearing in book collections on Amazon, pulled together by some canny entrepreneur with an eye to the main chance, and sold at a high price.

That said, I can see the arguments for libre OA. I realise, for instance, that research published under an all rights reserved licence today will not enter the public domain for many, many years. I also understand that the tidal wave of papers now being produced each year means that few if any researchers can hope to process all the information relevant to their work, so machines will have to do much of the initial sifting (and indeed may eventually begin to make scientific discoveries themselves). So the right to mine research papers is becoming urgent.

But while I understand the “information overload” problem, and while I can see the potential offered by TDM, I feel bound to say that many of the papers published today appear to offer very little value, and so should probably never have been published in the first place. That they are published at all is because most papers now serve to fill CVs, not to advance science.

Leaving aside the fact that this obsession with filling CVs encourages cheating, fraud, and mediocrity, at some point CVs themselves start to suffer from information overload. Soon when a researcher applies for a job, or funding, it will be necessary to text mine their CV in order to extract the relevant data! What we need are fewer, better papers, not more low quality papers.

Regarding TDM, I think a better approach might be for governments to amend copyright laws to ensure that researchers have an automatic right to mine papers to which they have lawful access. As I understand it, this is the approach that the UK government has taken, and it is an approach being proposed more widely (not least in France).

So which form of OA do I expect to become the prevailing form? In the short term I expect to see a mixed picture but as gold OA grows so libre OA will grow, and gratis will decline. However, I suspect we will see much less CC BY than some OA advocates would like.


Do you think that the struggle between the green and gold routes of OA will become stronger and stronger, or will a compromise be found? 

The struggle between gold and green may intensify in the short term, but unless something changes my expectation is that green OA will be increasingly side-lined. Green mandates will doubtless continue to grow, but as I noted they can be complied with by means of gold OA, and so as gold OA funding grows I would expect gold OA to become the default.

This is certainly what we are seeing in Europe right now: Research Councils UK is providing funds for UK institutions to pay for gold OA, universities in The Netherlands are signing new-style Big Deals that combine subscription access and gold OA publishing fees, and Germany’s Max Planck is seeking to engineer a mass “flipping” of journals from subscription to OA models.

Meanwhile publishers are persuading librarians that there is no need to host full-text journal articles in repositories, but simply to link to the OA version on the publisher’s site. This is the purpose of Elsevier’s Institutional Repository Pilot Project (which it expects other publishers to join at some point). In Elsevier’s terms, repositories will be reinvented as “vehicles for discovery” rather than sources of full text.

Repositories will of course also continue to grow and flourish. But while they will host things like theses, working papers, newsletters, conference papers and other grey literature in full-text, their role will increasingly be an archival one and a “shop window” for the institution’s research efforts. Research papers will be hosted on publisher sites and linked to.

Elsewhere, we can expect to see growing pressure on social networking platforms like Academia.edu and ResearchGate to redefine themselves as vehicles for discovery too (encouraged by the use of take-down notices). This appears already to be the situation with Mendeley, which was acquired by Elsevier in 2013. This tweet seems to say it all.

And as also noted, while green OA may appear to be the best choice for the Global South, the CAPES incident demonstrates the extent to which governments and funders in the developing world are likely to feel that in order to present themselves as serious centres of “research excellence” they need to have their scientists publish in international journals, which will increasingly mean paying to publish.

If there is any sort of compromise in sight I think it will be some form of what is variously called diamond or platinum OA, where there are no paywalls and authors pay no publishing charges. However, that still leaves us with the question of who will pay, and how much. Equally importantly, how transparent will the costs be?

And while platinum/diamond OA is currently discussed in terms of the research community’s “reclaiming of ownership of the mission of scholarly communication” and by referencing initiatives like the Episciences Project, we should note that commercial publisher De Gruyter has been offering what it calls a “publisher pays” model for some time. This allows research institutes, societies, universities and other organisations to pay all the costs of publishing, so that the content is made freely available and the author has to pay no publication fee. De Gruyter says this option is proving very popular, and currently it publishes over 500 open access society journals in this way.

For me the interesting question is: at what point does platinum open access become no different to the membership schemes that OA publishers have long operated, or the new-style Big Deals that Dutch universities are currently inking with legacy publishers?

Once again, however, this is a compromise that can be expected to suit publishers more than it suits the research community.


What areas have the most potential of embracing such experimental forms of conducting research as open notebooks or open peer review? Do you see any particular barriers that could prevent their wider adoption? Do you think they will become new standards some day?

I think it is too early to say whether experimental forms of openness like open notebooks and open peer review will become new standards. But I would certainly hope that open peer review becomes the norm.

Concerning open notebook science (a method pioneered by an organic chemist), I would think areas like chemistry and the medical sciences offer the most potential.  Beyond that, I suspect experimental practices and methods will remain niche — a point I think Toma Susi conceded when I spoke to him recently regarding his decision to publish a grant proposal in the new Research Ideas & Outcomes (RIO) Journal.

However, this is a future that RIO certainly anticipates, describing itself as it does as a journal that plans to publish “all outputs of the research cycle, including: project proposals, data, methods, workflows, software, project reports and research articles together on a single collaborative platform, with the most transparent, open and public peer-review process.”

What we have learned is that receptivity to openness tends to be discipline specific. The success of arXiv, for instance, owes a great deal to a pre-internet culture that existed among the physics and mathematics communities in which they routinely shared preprints with one another in paper form.

In terms of barriers, I would say that if legacy publishers do manage to appropriate open access in the way I anticipate then they will act as a drag on innovation and greater openness. They have come to accept OA because at some point they realised it was possible to provide it in a way that allowed them to continue publishing their legacy journals, and in a way that protected their profits. To push openness further would, I think, threaten those profits, and indeed probably call into question the continued existence of the journal itself.


There are several mature business models for open journals at the moment. However, we are not sure what the best solution is for books. Do you think that the Open Library of Humanities model (a consortium of institutions paying for publishing without APCs) is a good answer to this issue?

Open Library of Humanities publishes journals rather than books I think, but do I feel its model is appropriate for books? Personally, I think it is too early to be definitive about OA books, and that seems to be the conclusion reached by Jisc.

For a start, books are mainly published by humanists, and humanists are most resistant to OA. Moreover, it is far from cheap to publish a book. It is sobering to note, for instance, that legacy publishers charge authors around £10,000 to publish an OA book (i.e. Routledge), or even £11,000 (i.e. Nature).

This latter point would certainly seem to suggest that some kind of consortial model is appropriate. And this seems to be the model envisaged by Lever Press (although it has also talked of using an “unlocking” model similar to that pioneered by Knowledge Unlatched, which could be a little different perhaps).

Once again, however, I would be concerned about the degree of financial transparency provided by a consortial model, particularly when dealing with a for-profit publisher. As I noted earlier, OA advocates long argued that having authors pay APCs would introduce market discipline, on the grounds that authors would shop around and make a buying choice based on price. This has not happened, partly because we have seen a move to the bulk buying of publication rights, which strikes me as being somewhat similar to a consortial model. So the question is how would a consortial arrangement provide the price discipline we see in a true market? If nothing else, I suspect this aim will be confounded by what one might call “prestige hunger”.

This came home to me recently when I spoke to a researcher determined to make her new book open access. I pointed her towards Ubiquity Press (which, by the way, still appears to charge £9,340 if you want your book typeset, copyedited and with language checking and book index services provided), and a couple of other new-style OA book publishers (including Punctum Books). Her response: “Well, my co-author is a junior researcher and he does not have tenure, so he really needs to build up his CV. No one is going to be impressed with a book published by an upstart OA publisher”.

As a result, she is currently planning to go with a legacy publisher that offers an OA pay-to-publish option.

This all suggests to me that there is a need to give a lot more thought to how OA books could be published effectively.


All over the world there are many OA strategies and policies, not always coherent. Do you see any effective mechanisms for the future international coordination of OA strategies and policies?

We are seeing some attempts here, most notably perhaps with the PASTEUR4OA project in Europe. However, as you indicate, what is needed is a global approach. UNESCO might seem to be an organisation that could take responsibility here. I know it sees open access as something it ought to take a leadership role in, and it has published a “policy guidelines” document. But I fear UNESCO is too bureaucratic, and easily mistakes meetings and reports for practical action. It is also hostage to geopolitical forces that inevitably limit its ability to act decisively.

In the end it is the purse holders who are best placed to take this on. One funder organisation that has taken a global stance on open access is the Global Research Council (GRC). Again, however, if you look at its 2013 “Action Plan towards Open Access to Publications” you will see that its main focus is on persuading publishers to adopt OA. To that end it talked of the need to develop “an integrated funding stream for hybrid open access”. Hybrid OA is now widely viewed as antithetical to an effective transition to open access, not least because it plays into the hands of legacy publishers. GRC’s approach is a further reminder that the disorganised state of the OA movement is encouraging governments and funders to turn to publishers for solutions, and not coming up with the right ones as a result. And as we have seen, publishers are very adept at infiltrating policy committees and working groups — as they did so successfully with the UK Finch Committee.

As far as effective mechanisms for international coordination go, therefore, one might want to be a little sceptical about possible outcomes.

In any case, as the reality of what it will cost to monitor and manage OA policies comes more sharply into focus, funders and institutions are likely to conclude that it is much simpler to continue outsourcing everything to publishers. Publishers have always argued that there is a danger that open access will lead to a great deal of wasteful duplicated effort. They are right of course, particularly when seeking global solutions. So the choice for funders would seem to be: subcontract everything to publishers, or try to get a disorganised research community to “reclaim ownership” of scholarly communication. Which would you opt for?

The good news is that the world will surely get open access. The bad news is that the research community will probably fail to resolve the affordability problem that led many to join the OA movement in the first place. More worryingly, open access could end up as a half-finished revolution.

As Vitek Tracz has pointed out, scholarly communication will not be fit for purpose in the networked world until the kind of developments he outlined when he spoke to me last year have been implemented, including the abandonment of the traditional journal.

Essentially, the focus needs to shift from, “How can we force researchers to embrace OA?” to, “How can we use the network to create a more effective and efficient scholarly communication system, one fit for the 21st Century?”

And in order to do that it would appear that the research community would have to disintermediate legacy publishers. This could mean creating “overlay journals”, or developing a range of other new publishing initiatives in which the whole process is managed and controlled by the research community itself. Examples of the latter include the use of institutional repositories as publishing platforms, and the founding of new OA university presses like Collabra and Lever Press.

Governments could also do more to fund and support low-cost national and regional publishing platforms like SciELO, Redalyc, AJOL and CyberLeninka.

One would also need to see the editors of legacy journals following the example of the editorial board of Lingua, by declaring independence from their publisher and setting up a rival journal, something the board of Cognition is currently also considering. However, as Peter Suber has pointed out, while there is a history of such actions, they are rare. The fact is that while researchers are happy to shout the odds, and sign petitions like the Cost of Knowledge, it is far from clear that many are willing to walk the talk.

In the end, the key question is whether the research community has the commitment, the stamina, the organisational chops and/or the resources to reclaim scholarly communication. While I would love to end on a positive note, I am personally doubtful that it has. The fact is that, OA advocates aside, there does not appear to be much appetite in the research community for giving up publishing in prestigious journals, and abandoning the notorious Impact Factor. More importantly, university managers and funders do not want to see anything that radical occur. We live in an age of bureaucratic scrutiny, and scrutineers crave simple and standard ways of practising their dark arts. That is exactly what the IF and legacy journals provide. If I am right, OA will surely remain a half-revolution, for now at least.

Open Access Archivangelist: The Last Interview?

Our special guest today is Stevan Harnad, a prominent figure in the Open Access movement. Author of the famous 'Subversive Proposal', founder of 'Psycoloquy' and the journal 'Behavioral and Brain Sciences', creator and administrator of AmSciForum, one of the main coordinators of the CogPrints initiative – the list could be stretched far beyond that – he needs no introduction to anyone not wholly a stranger to the story of the Open Access movement. A cognitive scientist specialising in categorization, communication and consciousness, Harnad is Professor of cognitive sciences at the Université du Québec à Montréal and the University of Southampton, an external member of the Hungarian Academy of Sciences and doctor honoris causa of the University of Liège. But even his polemics with John Searle about the Chinese Room did not become as famous and influential as his Open Access advocacy.

Open Access Archivangelist

It's often said that it all began in 1994 with Harnad’s 'Subversive Proposal': a call to fellow academics to upload all their previously published research output to open access repositories, thus making it freely accessible to anyone with Internet access. Yet Stevan Harnad's adventure with open scholarly publishing began before that – as far back as 1978, when he founded the Open Peer Commentary journal 'Behavioral and Brain Sciences'. The journal was unique in the way it complemented traditional peer review with open peer commentary: copies of each accepted article were sent to about 100 experts in the fields it touched. Their short commentaries were then co-published with the target article along with the author's replies. As Harnad made clear in his 2007 interview with Richard Poynder, although the journal was published in the paper era and was thus technologically incapable of becoming anything close to what later became known as Open Access – it made Stevan wonder about ways in which more people could benefit from open peer commentary. So when, in the mid-1980s, Harnad was exposed to the emerging Usenet, his maturing ideas at last met the right technology. Harnad called the idea 'skywriting'. Open access to scholarly literature was then the only logical conclusion – a necessary condition for skywriting.

Stevan Harnad was with the Open Access movement from the very beginning, longer even than the term itself has existed (the term 'Open Access' was introduced in 2002 with the Budapest Open Access Initiative). For a number of years he expressed his thoughts about the state of Open Access in particular and academic publishing in general in the American Scientist Open Access Forum (now the “Global Open Access List”, GOAL), as well as on his blog 'Open Access Archivangelism'.

100% Green Gratis OA

Harnad's long-standing advocacy for “Green” Open Access (OA) is well known. According to him, the fundamental priority is for academics to fill their institutional research repositories. Once all published research output is openly available via this “Green Road” without delay (embargo), academic publishing will have to be modified in order to survive. The emerging model (“Fair Gold”) will be Open Access, with journal publishers' roles reduced to their sole remaining essential function: managing peer review. Much of what we do toward attaining Libre (CC-BY) Gold OA before academic output reaches 100% Gratis (toll-free) Green OA is premature, redundant and may even delay the transformation of academic publishing to Fair Gold OA (playing into the hands of publishers who are trying to delay OA as long as they can).

'Retirement'

It seems, however, that the long era of Harnad's 'Archivangelism' for Open Access is coming to an end. Earlier this year, 22 years after the 'Subversive Proposal', Harnad made it quite clear via Twitter that he is about to quit Open Access Advocacy.

Tomasz Lewandowski contacted Stevan and asked him about his decision, its context and his plans for the future. Stevan was kind enough to give us an interview summing up his career as Open Access advocate.

 

The Interview:

Tomasz Lewandowski: Can you tell us a bit about your research?

Stevan Harnad: My research is on how the brain learns new categories and how that changes our perception, as well as on the “symbol grounding problem” (how do words get their meaning?) and the origin and adaptive value of language. I also work on the Turing Test (how and why can organisms do what they can do? what is the causal mechanism?) and on consciousness (the “hard problem” of how and why organisms feel). Apart from that I work on open-access scientometrics (how OA increases research impact and how OA mandates increase OA). I also edit the journal Animal Sentience: An Interdisciplinary Journal on Animal Feeling and I am beginning to do research on animal sentience and human empathy.

Concerning your now rather famous tweet about your retirement as Open Access Archivangelist: what was the context of this decision? What's next?

The context (if you look at the tweet conversation) was that Mike Eisen (co-founder of PLoS) was implying that copyright law and lawyers consider the requesting and receiving of reprints or preprints to be illegal. I think that is nonsense in every respect. It is not illegal, it is not considered illegal, and even if it were formally illegal, everyone does it, preventing it would be unenforceable, and no one has challenged it for over a half century!

So I replied that this was wishful thinking on Mike’s part. (He is an OA advocate, but also co-founder and on the board of directors of a very successful Gold OA publisher, PLoS. There is a conflict of interest between publishers (whether they be TA [Toll Access] publishers or OA publishers) and the advocates of Green OA or eprint-sharing. So what I meant was that Mike was wishing it to be true that eprint-sharing was somehow illegal, and hoping that it would not happen, as it conflicts with the interests of getting researchers to pay to publish in Gold OA journals -- rather than to publish in TA journals and self-archive -- if they want OA).

Mike replied (equally ironically) that I was the champion of wishful thinking. At first I was going to reply in the same vein, with a light quip. But then I decided, no, it’s true: I had long wished for all refereed research to be Green OA, and my wish has not been fulfilled. So I simply stated the fact: That he is right, I have lost and I have given up archivangelizing.

If it turns out that the wish is nevertheless fulfilled eventually, all the better. But if it is overpriced Gold OA (“Fool’s Gold”) that prevails instead, well then so be it. It’s still OA.

My own scenario for a rational transition to “Fair Gold” OA via Green OA has been published and posted many times, and it may eventually still turn out to be the path taken, but for the past few years I find that all I am doing is repeating what I have already said many times before.

So I think suffering animals need me much more than the research community does. This does not mean I will not be around to say or do what needs to be said or done, for OA, if and when there is anything new I can say or do. But the repetition I will have to leave to others. I’ve done my part.

Bekoff, M., & Harnad, S. (2015). Doing the Right Thing: An Interview With Stevan Harnad. Psychology Today.

Was there any point during your time as an OA activist that you felt it is all going in the right direction and your vision will soon become true?

Quite a few times: First in 1994, when I made the subversive proposal; I thought it would just take a year or two and the transition to universal self-archiving would be complete. Then I thought commissioning CogPrints would do the trick. Then making CogPrints OAI-compliant. Then creating OA EPrints software; then demonstrating the OA citation advantage; then designing Green OA mandates by institutions and funders; then designing the copy-request Button; then showing which mandates were effective; then debunking Fool's Gold and the Finch Report.

But now I see that although the outcome is optimal, inevitable and obvious, the human mind (and hand) are just too sluggish and there are far, far more important things to devote my own time to now. I've said and done as much as I could. To do more would just be to repeat what has already been said and done many times over.

Carr, L., Swan, A. & Harnad, S. (2011) Creating and Curating the Cognitive Commons: Southampton’s Contribution. In, Simons, Maarten, Decuypere, Mathias, Vlieghe, Joris and Masschelein, Jan (eds.) Curating the European University. Universitaire Pers Leuven 193-199.

In an interview you gave in 2007 to Richard Poynder you drew a vision of something one might call an intrinsic history of ideas of Open Access. At the beginning of the modern Open Access movement there were, according to what you said, two main streams of thought. One stream was concerned with the accessibility of scholarly literature, the other - with its affordability. In the former stream, one can position your BBS and arXiv and other early OA initiatives. In the second stream, there is Ann Okerson and her efforts to make scholarly literature more affordable for universities (though perhaps not for the broader public). And although not primarily concerned with Open Access, the search for a more affordable scholarly journal financing model eventually led to the APC model (paid Gold OA). So then the accessibility movement became the Green Road to Open Access, and the affordability movement - the Gold Road to Open Access.

Now, when things are put this way, I think we can see the inner tension of the Open Access movement more clearly. A possibility arises that these two streams within the Open Access movement were not that compatible. Could you elaborate on that? If you look at the Open Access Movement as an offspring of two separate problems: one related to the accessibility of the scholarly literature and the other related to its affordability, do you think that OA was ever really a single, coherent movement at all?

First of all, two pertinent details: (1) the APC (paid Gold OA) cost-recovery model was already there, explicitly, in the 1994 Subversive Proposal – as was the assumption that universal Green OA self-archiving must come first. (2) Ann Okerson was not particularly an advocate of APCs as the solution to the affordability problem, but rather of licensing.

Affordability is just an aspect of the accessibility problem: If there were no accessibility problem -- if there were no need for all researchers to have access to all research, or if they somehow already had it -- then affordability would not be a problem, or a very minor one. Conversely, if there were no affordability problem, then accessibility would not be a problem.

But affordability was always primarily a problem experienced directly by institutional librarians (the "serials crisis") whereas accessibility was a problem experienced directly by researchers. The solution for affordability seemed to be lower journal prices whereas the solution for accessibility was for researchers to provide to their final refereed drafts the open access that the online era had made possible -- by self-archiving them in their institutional repositories (i.e., what came to be called "Green OA").

The ultimate solution, of course, was (1) universal Green OA self-archiving followed by (2) universal journal subscription cancellation by institutions, (3) the cutting of all obsolete journal products and services and their costs by publishers, and (4) a transition to author-institutional payment for the remaining essential cost (managing peer review) up front (what came to be called "Gold OA").

But this optimal Gold outcome was from the very beginning (already in the 1994 Subversive Proposal) predicated on first providing universal Green OA as the source of the access and the driver of the cancellations, downsizing, and conversion to Gold OA. Without providing Green first, the only way to get to Gold OA is to pay the inflated price of pre-Green "Fool's Gold" OA, which does not solve the affordability problem, leaving all obsolete products and services bundled into the inflated price per article. And even the notion of a global "flip" of all the planet's journals to Fool's Gold OA is obviously incoherent to anyone who thinks it through.

So the rush for a pre-emptive solution to the affordability problem has become a Fool's Gold Rush. Only if institutions and funders first mandate Green OA globally can there be a viable, stable transition to affordable, scalable, sustainable "Fair Gold" OA.

Harnad, S (2014) The only way to make inflated journal subscriptions unsustainable: Mandate Green Open Access. LSE Impact of Social Sciences Blog 4/28.

You once defined Open Access as 100% Open Access - meaning it's either 100% or not at all, because only 100% will make the traditional publishers fall. This definition is fair enough but in reference to some of the previous questions I think we might rather need an operational one. So let me ask you - what would need to happen for you to say "Hey, today we have Open Access in the academic world"?

See above: "Only if institutions and funders first mandate Green OA globally can there be a viable, stable transition to affordable, scalable, sustainable "Fair Gold" OA". Without 100% Green OA, journals are not cancellable.

Piece-wise local transitions to (Fool's) Gold OA (by country, institution, funder, field or publisher) not only add to the overall costs of access while subscriptions continue everywhere else, but they divert attention from what really needs to be done, which is for all funders and institutions to mandate Green OA (with deposit required immediately upon acceptance for publication plus either immediate OA or the copy-request-Button). In contrast, unlike Fool's Gold OA, Green OA can be mandated piece-wise (by country, institution or funder).

And of course publishers know all this, which is why they are putting all their efforts into embargoing Green OA, trying to force those who want OA to pay for Fool's Gold instead.

Sale, A., Couture, M., Rodrigues, E., Carr, L. and Harnad, S. (2014) Open Access Mandates and the "Fair Dealing" Button. In: Dynamic Fair Dealing: Creating Canadian Culture Online (Rosemary J. Coombe & Darren Wershler, Eds.).

What will the world of scholarly communication look like after 100% Open Access is established? How would this influence the entire model of scholarly communication?

Once Green OA is universally mandated and provided, there will be the transition to Fair Gold OA, with peer-review being the only remaining service provided by publishers, and paid for by institutions out of a fraction of their subscription cancellation savings. Once research papers are all open and text-minable, open data will soon follow, and with it open science. The rate of progress and collaboration in research will be greatly enhanced and we will have a rich battery of OA metrics for monitoring and measuring research progress, productivity, and currents of influence.

Harnad, Stevan (2013) The Postgutenberg Open Access Journal (revised). In, Cope, B and Phillips, A (eds.) The Future of the Academic Journal (2nd edition). 2nd edition of book Chandos.

Your phrase stating that Elsevier was "on the side of angels" when it comes to embracing OA took on a life of its own. You first took that position in 2007 after Elsevier's policy on Green Open Access was introduced. You still maintained it even when the Cost of Knowledge boycott was at its peak. Even when in 2013 Elsevier excluded from its policy researchers who were under institutional mandates, you "continued to attest that". As recently as 2015, Michael Eisen still seemed to hold a grudge against you for this statement. Could you once more recall the context in which this phrase was coined and what exactly it meant? Do you still continue to attest to it?

The "side of the angels" quip was always a ruse, designed to keep Elsevier from trying to embargo Green OA for as long as possible by throwing them a token credit to use as PR amidst their onslaught of blame (from librarians and authors). Elsever knew it, I knew it, and so did anyone else with a realistic sense of what was going on, and what was at stake.

I also did not believe in boycotts (and their failure every time has borne me out) but in mandates (though they have not yet prevailed as I had hoped either).

But much less trivially, although it was just as obvious and inevitable as OA that publishers would use every trick possible to try to stave off Green OA for as long as possible, it should be obvious that publishers are not the real obstacles to OA. The real obstacles are precisely the ones who will benefit directly from OA the most: researchers. (The biggest indirect beneficiary is of course the tax-paying public that supports the research and researchers.)

If researchers worldwide had not been so sluggish, timid and obtuse, and had provided Green OA of their own accord as of 1994 (as computer scientists and physicists had already been doing then for over a decade, taking advantage of each new online means of providing OA as it appeared, completely oblivious to what publishers might think or say about it) then we would have long reached the optimal and inevitable by now.

But most researchers didn't. So we are still busy adopting OA mandates (many of them weak, hence ineffectual) and trying to get their details right:

Vincent-Lamarre, P, Boivin, J, Gargouri, Y, Larivière, V & Harnad, S (2016) Estimating open access mandate effectiveness: The MELIBEA Score. Journal of the Association for Information Science and Technology (JASIST), 67.

On the one hand, big legacy publishers are embracing Open Access more and more - by introducing open access options to their old journals, by establishing policies for self-archiving and by creating new open access journals. All this has been happening ever since Springer bought BioMed Central back in 2008. On the other hand, their revenues have stayed as high as before, or they've even increased. You yourself wrote many posts concerning the phenomenon of "double-dipping". In reference to your answer to the last question - could you comment more broadly on big for-profit scholarly publishers and their relation to Open Access? Maybe you have some predictions about the nearest future of the business, which you would like to share?

I continue to believe that it is virtually irrelevant what publishers say or do. The sole retardant is researchers; their institutions and funders can ensure that they do the right (optimal, inevitable) thing -- though it is too late now to get them to have done it as soon as it was possible!

Publishers' Fool's Gold OA options are just distractions, designed to delay the optimal and inevitable outcome for as long as possible (and publishers know this full well).

So it all depends on how soon effective Green OA mandates by institutions and funders get adopted globally: Only the universal availability of Green OA will make journal subscriptions cancellable, thereby forcing publishers to cut all their remaining obsolete Gutenberg-era products and services (like the print edition, the online edition, archiving and access-provision) and their costs, downsize to the sole remaining essential service of PostGutenberg peer-reviewed journal publishers (namely, the management of peer review, which researchers provide for free, just as they provide their research for free), and convert to Fair-Gold OA fees in order to recover the remaining minimal costs.

Harnad, S. (2010) No-Fault Peer Review Charges: The Price of Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16 (7/8).

You said that even should the overpaid "Fool's Gold" prevail, it would still be Open Access. So (in reference to the two streams of the Open Access Movement) - is it for you accessibility above affordability? Accessibility no matter the costs?

If the planet were to opt for universal Fool's Gold instead of mandating Green OA and attaining OA plus Fair Gold, that would be fine with me. Fools will always be parted needlessly from their money, and my only real objective all along was universal OA, as soon as possible.

The reasons I oppose and mock the Fool's Gold Rush, however, are precisely the ones I've described: Fool's Gold is bloated, unscalable, unaffordable and unsustainable -- hence (I infer) unattainable. And the result is that it is diverting attention and energy from the only route to universal OA that I believe will work, and that route is Green OA self-archiving, mandated globally by all research institutions and funders.

There are numerous Open Access policies all around the world, introduced by research organisations, funders and various other stakeholders. ROARMAP currently lists over 700 policies. Back in 2007 you seemed to think that all that was needed for 100% Green Open Access was open access mandates. Now we have legal Open Access mandates introduced and working in practice - and there are over 500 more of them than back in 2007. What makes you think we still aren’t any closer to our goal, after all?

We are closer, but not nearly as close as we could and should be, because there are still far from enough Green OA mandates, and many of them are needlessly weak and ineffectual.

What does an ideal, strong and effectual OA mandate look like? Are there any mandates like that out in the wild?

The essential features of an effective Green OA mandate are the following.

(1) It must require deposit immediately upon acceptance for publication (not after an embargo).
(2) It must require deposit of the author's refereed, accepted final draft (not the publisher's PDF).
(3) It must require deposit in the author's institutional repository (not institution-externally).
(4) Immediate deposit must be made a prerequisite for research performance evaluation.
(5) The repository must implement the copy-request Button.
(6) The immediate-deposit need not be immediate-OA (as long as the Button is implemented).

Harnad, Stevan (2015) Open Access: What, Where, When, How and Why. In: Ethics, Science, Technology, and Engineering: An International Resource eds. J. Britt Holbrook & Carl Mitcham, (2nd edition of Encyclopedia of Science, Technology, and Ethics, Farmington Hills MI: MacMillan Reference).

As far back as 10 years ago you thought that progress in self-archiving was far too slow. In the paper "Opening Access by Overcoming Zeno's Paralysis" you diagnosed the academic community as overwhelmed by what you called "Zeno's Paralysis". Therefore, as could be understood, you maintained the position that the problem is psychological in nature. There are others, however, who maintain that the problem is more systemic in nature: the whole "publish or perish" scholarly communication system that has emerged over the last few decades has too many intrinsic incentives that guide researchers in wrong directions, and too few incentives that would direct them towards OA. With the perspective of the years gone by, has your diagnosis changed? Who is to blame - the scientists or the system they work in?

The ones to blame are (1) the scientists themselves, for not providing Green OA of their own accord, unmandated, and (2) their institutions and funders, for being so sluggish in mandating it, and so slow to optimize their mandates.

The systemic problems of research funding, publication and assessment (peer review, publication lag, publish or perish, impact factors, research evaluation, data-mining, re-use licensing, etc.) are real enough, but they are not access problems -- and it was, and continues to be a big mistake to conflate them with the much simpler, focused problem of providing immediate toll-free online access to refereed research to all would-be users.

You mean researchers specifically as researchers, or researchers as human beings? If it’s only about being a researcher, then what makes this particular group as sluggish as, according to you, they now are? Yet you said previously it was about the “human mind”.

I'd change it now from "human mind" to "academic" mind (though the more general case can probably be made too, as an academic is merely a human being in a certain kind of profession...).

I have to confess that I don't understand why it's taking academics so long. Some say it's because they are already overworked, but I think that's a self-serving view and probably not true about most academics. Besides, self-archiving takes next to no time per paper (and even with "publish or perish" academics don't publish that many papers per year!)

But I've taken a stab at trying to diagnose and catalogue the many causes of "Zeno's Paralysis" in the BOAI self-archiving FAQ. There are at least 38 of them at last count. The top two are laziness and fear of publishers.

Harnad, S. (2006) Opening Access by Overcoming Zeno's Paralysis, in Jacobs, N., Eds. Open Access: Key Strategic, Technical and Economic Aspects, Chapter 8. Chandos.

Nowadays you don’t hear about plain Open Access that often. The term is still widely used and is a highly recognisable mark, yet it is often either traded for “Public Access” – especially when gratis Open Access is meant – or, when libre Open Access comes into consideration, incorporated into Open Science. Is Open Access, in your view, an inseparable part of Open Science (as Open Data perhaps is)? Or is it a separate goal, achievable by separate means, that is being thrown into a big, loose and fuzzy bag labelled “Open Science” for purely rhetorical reasons?

Not only is universal toll-free online access to refereed research (OA) -- Gratis, Green OA -- the first and foremost goal, but it is and has long been completely within the research community's immediate reach. It just has not been grasped. We cannot have Libre OA and CC-BY till we first have Gratis OA. And we cannot have Open Science without Libre OA and CC-BY. And Open Data, even if CC-BY, are of limited use if the refereed articles based on them are not OA.

So just as the optimal and inevitable outcome has been delayed by the pre-emptive Fool's Gold Rush, so it has been delayed by trying to reach pre-emptively for Libre OA, CC-BY and Open Science without first troubling to mandate universal Green, Gratis OA. (I’ve called this “Rights Rapture.”)

Harnad, S. (2013). Worldwide open access: UK leadership? Insights, 26(1).

Apart from Open Access, what do you mainly do as a researcher?

My research is on how people acquire categories. To categorize is to do the right thing with the right kind of thing: to approach it, avoid it, eat it, mate with it, manipulate it, name it, describe it. Categories are kinds, and our brains need to find the features that distinguish the members from the nonmembers of each category relevant to our survival and success.

I say “our brains” do it because often we categorize without knowing how we are doing it. My field, cognitive science, is devoted to "reverse-engineering" the mechanisms in our brains that generate our capacity to do all the things we can do. The ultimate goal is to create a model that can pass the Turing Test, a model that is able to do all the things we can do.

In the lab people learn new categories (e.g., new kinds of shapes) by trial and error, with feedback signaling to them whether they are right or wrong. We measure what is going on in their brains as they learn, and we also model the process with computer-simulated neural networks that are trying to learn the same categories.
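This kind of error-driven category learning can be sketched as a minimal perceptron: the model guesses a category, is told whether it was right or wrong, and adjusts its weights only after errors. The two "shape" features, the category rule, and all parameters below are invented for illustration, not taken from the lab work described here.

```python
import random

random.seed(0)

# Hypothetical stimuli: each "shape" is a pair of features in [0, 1];
# the (hidden) category rule is: member if the features sum to more than 1.
data = [(random.random(), random.random()) for _ in range(200)]
labeled = [(x, 1 if x[0] + x[1] > 1 else 0) for x in data]

# Perceptron: start ignorant, learn from corrective feedback.
w, b = [0.0, 0.0], 0.0
for _ in range(20):                      # repeated trial-and-error passes
    for x, label in labeled:
        guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        error = label - guess            # feedback: 0 if right, +/-1 if wrong
        w[0] += 0.1 * error * x[0]       # weights change only after errors
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

accuracy = sum(
    (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y for x, y in labeled
) / len(labeled)
print(accuracy)
```

The point of the sketch is the feedback loop: the learner never sees the rule, only right/wrong signals, yet the distinguishing features emerge in the weights.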

We share the capacity to learn categories from direct trial-and-error experience with many other species, but that is not the only way to learn categories. Our species is unique in that we can also learn categories verbally: Someone else who knows which features distinguish the members from the non-members of a new category can tell us. Almost all the words in a dictionary are the names of categories. And every word in a dictionary is defined. So if there is a word whose meaning you don't know, you can look up its definition. But what if you don't know the meaning of the words in its definition? You can look those up too. But this can't continue indefinitely; otherwise you would eventually cycle through the whole dictionary without having learned anything. This is called the "symbol grounding problem." Some of the meanings of some words, at least, must have been grounded the old way that we share with all other species -- via the direct trial-and-error experience we study and model in the lab -- in order to ground the meaning of a new category learned through verbal definition alone.

How many words need to be already grounded in experience -- and which ones -- so that all the rest can be learned from verbal definition alone? This is another problem we work on, by doing graph-theoretic analysis of dictionaries. The number is surprisingly small, under 1500 words for the biggest dictionaries we have analyzed so far. The grounding words tend to be learned earlier, and to be more frequent and more concrete than the rest of the words in the dictionary. We think this may also provide some clues about the evolutionary origin of language as well as its adaptive function: Language is what allowed our species to acquire infinitely more new categories than any other species, and far more quickly and safely, by combining the names of the already grounded ones into definitions or descriptions of new ones, conveyed by those who already know the new category to those who do not. It is also what made science possible -- and it is also what led to Open Access. If 300,000 years ago we had “charged” one another a toll for access to information about new categories, language would never have evolved. (Nor would money!)
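The graph-theoretic idea can be illustrated with a toy mini-dictionary (the words and definitions below are invented, not from the actual analyses): each word points to the words used in its definition, and a grounding set must break every definitional cycle before the rest of the vocabulary can be learned from definitions alone.

```python
# Toy dictionary: each word maps to the words used in its definition.
# A grounding set must include enough words to break every definitional
# cycle; all remaining words can then be learned verbally.
toy_dictionary = {
    "animal": ["thing", "move"],
    "dog":    ["animal", "bark"],
    "bark":   ["sound", "dog"],   # definitional cycle: dog <-> bark
    "sound":  ["thing"],
    "move":   ["thing"],
    "thing":  ["thing"],          # self-referential root word
}

def learnable_from(grounded, dictionary):
    """Iteratively learn any word whose definition uses only known words."""
    known = set(grounded)
    changed = True
    while changed:
        changed = False
        for word, defn in dictionary.items():
            if word not in known and all(w in known for w in defn):
                known.add(word)
                changed = True
    return known

# Grounding "thing" alone leaves the dog/bark cycle unlearnable;
# grounding "dog" as well makes the whole vocabulary reachable.
print(learnable_from({"thing"}, toy_dictionary))
print(learnable_from({"thing", "dog"}, toy_dictionary))
```

In the real analyses the grounding set is found over dictionary graphs with tens of thousands of words, but the closure logic is the same: definitions can only teach a category whose defining features are already known.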

The connections between my research on the two ways of acquiring categories and the need for open access were mapped out in an interview with Richard Poynder a decade ago.

Poynder, R. & Harnad S. (2007) From Glottogenesis to the Category Commons. The Basement Interviews.
Blondin-Massé, A., Harnad, S., Picard, O. & St-Louis, B. (2013) Symbol Grounding and the Origin of Language: From Show to Tell. In: Lefebvre C, Comrie B & Cohen H (Eds.) Current Perspective on the Origins of Language, Benjamin.

Turing Test: effective or not? Does written communication on unspecified subjects have to involve processes we could safely call cognitive, or will chatbots stay what they are today? Today they are apparently much like chess-playing algorithms: essentially a set of clever heuristic rules and vast libraries of optimal move sequences. ELIZA wasn’t even that (it had no library, and its heuristic rules weren’t that clever), and still it fooled quite a few human testers.

The Turing Test is not a 10-minute chatbot test. Nor is it about "fooling" anyone. It is a scientific attempt to reverse-engineer cognition: to discover its underlying causal mechanisms. Turing's criterion is performance capacity. The model has to be able to do anything and everything a normal human can do, indistinguishably from a human (for a lifetime, if need be). Turing's insight is that if the mechanism can do everything we can do, indistinguishably from any of us, then we have no better or worse reason for affirming or denying that it has a mind than we have for affirming or denying it of any of us.

But the Turing Test comes at several levels. The best-known one, "T2," is Turing-indistinguishable verbal capacity (tested via email only). But we have many other capacities, and our verbal capacities are almost certainly grounded in them, as I described above: "T3" requires Turing-indistinguishability not just in verbal capacity, but in the capacity to interact with the world of things that words refer to. Hence T3 is Turing-indistinguishability in robotic (sensorimotor) capacity. (One can also require T4, Turing-indistinguishability in neural activity inside the head, but this is probably needlessly over-demanding.)

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4) (October 1992) pp. 9 - 10.
Harnad, S. (2014) Turing Testing and the Game of Life: Cognitive science is about designing lifelong performance capacity not short-term fooling. LSE Impact Blog 6/10 June 10 2014.

You acknowledge Turing’s statement that T2 and T3 are exactly what we do with other human beings in order to know whether they have minds or not. Yet you say that the test lasts for an entire lifetime, if needed. Does it mean we can’t really be sure if other humans have minds? Is assuming another person’s rationality just a courteous convention?

No, the lifetime capacity is just to rule out short-term tricks that really do just fool people. The Turing Test has two criteria: The first is that the model has to have our full performance capacity; the second is that we cannot tell it apart from a real person exercising that capacity. People can be fooled in the short-term, so it's important that neither the test nor the capacity be just short-term. But in practice I think that any robot that could interact with us (and the world) indistinguishably for a few days would probably be able to do it for a lifetime (i.e., it would probably have our full capacity).

In his "Return from the Stars" Stanisław Lem briefly pictures a robot cemetery. Various dysfunctional "automatons" (as he calls them) await, in a kind of giant hangar, being melted back into recyclable materials. The process is fully automated and supervised only by other robots. When the protagonist (an astronaut who has returned from a ten-year voyage to Fomalhaut and, owing to time dilation, faces a brave new world on an Earth 127 years older) strays into the hall, he witnesses a wide spectrum of the dysfunctional robots' behaviours. One appears particularly resourceful: to avoid being melted down, it desperately poses as a man wrongfully taken for a machine. Another is apparently praying.

You're a cognitivist and much of your research has been devoted to problems of Artificial Intelligence. If an AI showed such behaviour as portrayed by Lem, would you fight for the rights of robots as you do for the rights of animals? Even if Searle were right after all, and they all turned out to be just "Searle's Chinese rooms"?

Yes, if ever we have robots that are T3-indistinguishable from us I would conclude that they are, like us, sentient, and I would fight for their right to be free of needless human-inflicted suffering.

But are you not aware of the irony of segueing into speculations about science fiction when there is a stark reality very much like this one that is transpiring, unopposed, everywhere, at every moment, as we speak, not with robots but with living, feeling members of species other than our own?

Harnad, S. (2014) Animal pain and human pleasure: ethical dilemmas outside the classroom. LSE Impact Blog 6/13 June 13 2014.

In the Cambridge Declaration on Consciousness the mirror test is mentioned as an important way to distinguish between certain classes of animals according to their intelligence. As a cognitivist, what can you tell a layman about those animals that “pass” the mirror test?

It is actually trivial to design a robot that can pass the mirror test (locate and manipulate parts of its body using the mirror) and it certainly does not mean that the robot is conscious. To be “conscious,” by the way, means to feel. And even animals that don't recognize themselves in the mirror feel (i.e., they are sentient). So I consider the mirror test as just a test of some higher-order cognitive capacities. The real question is whether an entity feels. There is as much evidence that other mammals and other higher vertebrates feel as there is that preverbal children feel. And almost as much evidence that lower vertebrates and invertebrates feel.

Harnad, Stevan (2016) Animal sentience: The other-minds problem Animal Sentience 2016.001.

Can AI play any significant role in Open Science? Or maybe it is playing one now?

It could, if we had open science (but we don't!). AI and deep learning are already being applied to data-mining the tiny fragment of the scientific corpus that is online and open, but there is much more to come -- once we have OA.

Dror, I. and Harnad, S. (2009) Offloading Cognition onto Cognitive Technology. In Dror, I. and Harnad, S. (Eds) (2009): Cognition Distributed: How Cognitive Technology Extends Our Minds. Amsterdam: John Benjamins.

Now the other way round. Can Open Science (or maybe some particular aspect of it; Open Data for example) play a significant role in AI development?

Of course it could -- just as it could play a significant role in the development of any area of science. But for that you need open science. And for Open Science, we first need OA...

So instead of dreaming about the potential benefits of OS, we should first grasp the Green Gratis OA that has long been within our reach.

Without being too specific we might say we are witnessing a certain decline in liberal democratic trends all over the world. We could even speak of a crisis of the liberal democratic model. Can this situation influence the way the current science system works? And if yes, then to what degree?

The most flagrant example of this among the liberal western democracies is transpiring right now, in the heart of the EU, in my country of birth, Hungary. (Your own country, Poland, alas, looks like the next to follow suit.)

The current Hungarian regime's first attempt at an assault on science in 2011 failed, fortunately, but it's a fair harbinger of what is in store for science and scientists if the anti-democratic regimes' assault on democracy and human rights is not successfully resisted.

Nevertheless, in the end you didn't wholly abandon Open Access. You are currently engaged in scientometrics of Open Access publications. Could you make our readers more familiar with this branch of knowledge? Is there something you learned lately in this area which might change your view on Open Access?

Metrics has its proponents and its detractors. But if you think it through, what Bradley said of metaphysics -- "the man who is ready to prove that metaphysics is wholly impossible... is a brother metaphysician with a rival theory" -- is just as true of metrics: Metrics just means measures. Academics don't like having their performance evaluated by metrics like publication counts or citation counts (they don't like being evaluated at all), but we can only gain if we enrich our repertoire of metrics and validate their predictive power against a face-valid criterion -- which, in the case of research evaluation, might be peer rankings (another metric!).

The OA corpus offers the potential for measuring and validating many new metrics, field by field, including: (1) download counts, (2) chronometrics (growth- and decay-rate parameters for citations and downloads), (3) Google PageRank-like citation counts, recursively weighted (citations from highly cited articles or authors get higher weights), (4) co-citation analysis, (5) hub/authority metrics, (6) endogamy/exogamy metrics (narrowness/width of citations across co-authors, authors and fields), (7) text-overlap and other semiometric measures, (8) prior research funding levels, doctoral student counts, etc.
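Metric (3) in this list can be sketched as a standard PageRank-style power iteration over a citation graph; the papers, citation links, damping factor and iteration count below are hypothetical, chosen only to show how recursive weighting works.

```python
# cites[p] lists the papers that paper p cites (a toy citation graph).
cites = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def citation_rank(cites, damping=0.85, iters=50):
    """PageRank-style recursive citation weighting by power iteration."""
    papers = list(cites)
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iters):
        new = {p: (1 - damping) / len(papers) for p in papers}
        for citer, cited_list in cites.items():
            # A citation from a highly ranked paper passes on more weight.
            share = damping * rank[citer] / len(cited_list)
            for cited in cited_list:
                new[cited] += share
        rank = new
    return rank

ranks = citation_rank(cites)
# "C" is cited by three papers, including the well-ranked "A".
print(max(ranks, key=ranks.get))
```

Unlike a raw citation count, the recursive weighting distinguishes a citation from an influential paper from one by an uncited paper, which is exactly what makes such metrics validatable field by field once the underlying corpus is open.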

But for all this, we need one thing first: Universal Green OA.

Let's say I haven't abandoned OA. I've just had my say (many times over) and am now waiting patiently for the global research community to get its act together.

Harnad, S. (2008) Validating Research Performance Metrics Against Peer Rankings. Ethics in Science and Environmental Politics 8(11): The Use and Misuse of Bibliometric Indices in Evaluating Scholarly Performance. doi:10.3354/esep00088

You have lately devoted yourself to the fight for animals' rights. Can I ask you about the philosophico-ethical background of this part of your activities? What's your main argument for the animal rights?

The only animal "right" for which I am fighting is the right of sentient organisms to be free of needless human-inflicted suffering. And it is not an abstract philosophical issue but the greatest crime of humanity. (Humanity's greatest crime against humanity was the Holocaust. But its greatest crime tout court is the Eternal Treblinka we are inflicting on nonhuman animals.)

Why is it the greatest crime? Because we do it even though (at least in the developed world today) the horrors we inflict on animals are necessary neither for our survival nor for our health. And they are indeed horrors: indescribable, unspeakable, unpardonable horrors.

There is no horror that we inflict on nonhuman animals that we have not also inflicted on humans. But the fundamental difference is that we have decided that inflicting these horrors on humans is wrong, we have made them illegal, and all but sociopaths and sadists would never dream of inflicting them on people -- whereas inflicting them on animals is not only legal, but most of the human population acquiesces and collaborates in inflicting them, by demanding and sustaining the resulting products.

The only hope for the countless tragic victims of this crime of crimes is that the decent majority, once it is made aware of two fundamental facts -- (1) the true horror of what it entails and (2) that it is totally unnecessary -- will realize that it is wrong, just as it did with rape, murder, violence and slavery, and will renounce and outlaw it, just as it did with rape, murder, violence and slavery.

Rather than continuing to bang on about OA (which is a foregone conclusion in any case, and only a matter of time), I want to devote my efforts to hastening the end of this monstrous animal agony, inflicted needlessly by humans, and far more urgent (for the victims). Ironically, part of the solution here turns out to be an open access matter too: CCTV cameras, videotaping the horrors in the slaughterhouses and web-streaming the evidence openly online, for public inspection through crowd-sourcing.

Harnad, S (2015a) To Close Slaughterhouses We Must Open People's Hearts. HuffPost Impact Canada July 2 2015.
Harnad, S. (2015). Taste and Torment: Why I Am Not a Carnivore. Québec Humaniste.
Patterson, C. (2002). Eternal Treblinka: Our treatment of animals and the Holocaust. Lantern Books.

Academic discourses have been shaped by the material forms of dissemination

When humanists say to me: you're just making us change our practices because technology has changed, I get a bit jumpy for two reasons. Firstly, the new technology of open, non-rivalrous dissemination, is much more like the things we are trying to do with scholarship than rivalrous forms. Secondly, such an argument assumes that paper, books, the codex, and other material forms are not themselves some kind of technology that has determined our practices. Nobody ever talks, really, or at least not enough, about the way in which academic discourses have been shaped by the material forms of dissemination within which they have existed for most of their lives.

 

Martin Eve is a Professor of Literature, Technology and Publishing at Birkbeck, University of London. He founded the Open Library of Humanities, a charitable organisation dedicated to publishing open access scholarship. He is also a steering-group member of the OAPEN-UK project, a research project gathering evidence on open access scholarly monograph publishing in the humanities and social sciences. He is developing several digital humanities projects.

 

Michał Starczewski: You are the author of the book “Open Access and the Humanities”. What differences are there between the OA revolution in the humanities and in the sciences?

Martin Eve: The usual way in which open access is framed in the humanities is that it “lags behind” the sciences, but this creates a number of new problems. Why, some humanists ask, should the humanities just follow whatever the natural sciences are doing? Others ask why technological change should drive academic practice. Another set fear the influence of open licensing, which they claim may promote plagiarism (I do not believe this). Still others point to the problem of economics: far less work receives funding in the humanities and Article Processing Charges (APCs) are not readily available. Finally, others point to the fact that there isn't actually a straightforward divide between “the humanities” and “the sciences”, even on OA. Indeed, the discipline of chemistry is very poor at open access while philosophy has had a culture of pre-prints for some time.

So, there are differences in what humanists do and how it is communicated, but I often feel these are overstated. We all write because we want to be read and we know that paywalls pose a barrier to broader readership. That said, we do have a culture of monographs in the humanities that are substantially harder to make open access than articles and journals...

The discourse and practice of OA are focused on articles and journals. Meanwhile, for researchers in the humanities, monographs are often much more important than articles. One might say that the main conclusion of the Jisc OAPEN-UK final report on OA monograph publishing is that it is too early to recommend any specific model. What are the obstacles?

While I don't have the space here to go into every detail, there is a set of social and economic challenges around monographs that were extremely well explored in a recent report by Geoff Crossick for HEFCE in the UK. Central to these challenges are the economics. A separate report recently issued in the USA for the Andrew W. Mellon Foundation found that the cost was “$30,000 per book for the group of the smallest university presses to more than $49,000 per book for the group of the largest presses.” At this type of cost, it becomes very difficult to support a model such as a Book Processing Charge (borne by the author/institution/funder). There is also the thorny problem of trade books, the still-underexplored issue of how OA books are used (in comparison to print), and the reluctance of some tenure and promotion committees to admit born-digital manuscripts.

You have founded the Open Library of Humanities, a charitable organisation dedicated to publishing open access scholarship with no author-facing article processing charges (APCs). Could you explain how it works? Could it be a model for other institutions across the world? Are you going to publish monographs in this model as well?

The OLH works on a model of distributed library subsidy. So, instead of an author paying us ~$600 when an article has been accepted, we instead solicit contributions from libraries around the world that look like (but are not) a subscription. Libraries currently pay around $1000 per year to support our 15 journals. However, everything we publish is open access, so libraries are not “buying access” or a subscription or anything like that. They are supporting a platform that could not otherwise exist. It is a non-classical economic model but it seems to be working as around 200 libraries have currently signed up and we have seen a 100% renewal rate in our second year. We do intend to move to monographs, but this is further off. We are more interested, for now, in flipping subscription journals away from a paywalled mode and into our model. This can be achieved by journals either leaving their current publisher, or by us covering the APCs of that journal in the future. In this way, we get around the funding problems in the humanities for OA.

After the “Finch Report” the UK turned towards the Gold Route of OA. The findings of the monitoring of this policy are as follows: the majority of articles have been published in the most expensive, hybrid journals. The Wellcome Trust reported that 30% of the articles for which they had paid processing fees were not available when the Trust checked. What went wrong with the OA policy in the UK?

I don't really think it's fair to judge whether policies have “gone wrong” at this stage and it depends upon what you wanted to achieve in the first place. If the goal was to achieve OA and for it to be cheaper than a subscription model, then yes, there are some problems emerging here. But if the goal is to achieve open access, even if it costs more, then the policy is working well. I personally think that, in the long run, we need a system that is more sensitive to the budgetary pressures of academic libraries (and I believe that academic publishing should be a not-for-profit enterprise). But the different policies in the UK – the gold RCUK policy and the HEFCE green policy – are combining to create a culture where OA is the norm. To say that these policies haven't worked after four years (RCUK) and six months (HEFCE) is a little rash.

How do you see the future role of scientific publishers in the context of OA? Do researchers need publishers to organise peer review and ensure high quality?

I tend to think about publishing in terms of the necessary labour here. I do not support the idea that, under capitalism, people should work for free. If people are performing a service, then they deserve to be remunerated for that. The labour in publishing, therefore, is labour like any other. Publishers perform a variety of tasks that I think it would be foolhardy to discard and that require payment: peer-review organization, typesetting, proofreading, copyediting, digital preservation, platform maintenance, marketing, legal advice, identifier assignment, curation; the list goes on. Whether or not these “ensure high quality” is something of which I'm unsure. I regard peer review with deep scepticism and believe that it is more often a panacea than a rigorous gatekeeping method. Indeed, I recently wrote about the problems of predictive excellence with a group of others.

Do you think that the open data issue is as important in the humanities as in other disciplines? Is it a feasible scenario that humanities will be based on digital data? Are we witnessing a “digital turn”?

What's interesting here, I think, is that the term “data” is not well understood in the humanities. It implies a type of processing of quantitative material that most humanists don't encounter. Yet, at the same time, we all work with artefacts that could be called “data”. So, when I'm speaking with colleagues about this, I tend to use the word “evidence” or “paratext” to refer to data. I say, if you are writing about a nineteenth-century novel and you made a series of notes on it, the novel itself and your notes could both be considered data and might be valuable to someone else. That said, data sharing is controversial in many disciplines, so the fact that the humanities haven't leapt upon this is nothing to be alarmed about for now.

Is openness a necessary feature of the digital environment in the humanities?

It is not, sadly. As is evidenced by the fact that people have put up paywalls online around research material, it is perfectly possible to operate a closed digital environment for the humanities. That said, there is something interesting about this that always strikes me (drawing on the astute remarks of Peter Suber in his book, Open Access, from MIT Press). Knowledge, ideas and words are infinitely copyable without the original owner ever losing them. If I tell you something that I know, then you know it too and we are both richer. Digital technology that allows infinite copying is directly in line with this way of thinking. So when humanists say to me: you're just making us change our practices because technology has changed, I get a bit jumpy for two reasons. Firstly, the new technology of open, non-rivalrous dissemination is much more like the things we are trying to do with scholarship than rivalrous forms. Secondly, such an argument assumes that paper, books, the codex, and other material forms are not themselves some kind of technology that has determined our practices. Nobody ever talks, really, or at least not enough, about the way in which academic discourses have been shaped by the material forms of dissemination within which they have existed for most of their lives.

e-Infrastructure is not always interoperable. The information often can’t be distributed among different tools. The problem is serious when researchers work with GLAM (cultural institutions such as galleries, libraries and museums) resources. Is it possible to use common standards that make e-infrastructure interoperable? What are the main obstacles to using such standards?

It is, of course, possible to create common standards for e-infrastructure. However, the challenge here is that we have a highly distributed set of actors all with different end goals. Does Elsevier see itself as benefiting from working in an open, interoperable way, in the same way as a small, born-open-access book press? Does Wiley get the same benefit from being interoperable as an institutional repository? I'd argue that different stakeholder desires condition the degree of interoperability here as much as any technological aspect.

You are working on three new books. Could you please tell us something about these projects?

Certainly. The first book, which is currently in full draft, is called The Anxiety of Academia and it looks at the ways in which the concepts of critique, legitimation, and discipline are used by a set of contemporary novels to anticipate the way in which academics will read such novels. The second is called The Aesthetics of Metadata and this project, which is about 50% complete, reads a series of contemporary novels for the way in which they represent metadata-like structures. So, for example, I here look at Mark Blacklock's book on the Yorkshire Ripper hoaxer in the UK and the way in which accents, writing, and location all play a role in the hunt. I also look at the false footnotes in Mark Z. Danielewski's House of Leaves alongside the objects from a ruined future in Emily St. John Mandel's Station Eleven. Finally, the last book I'm working on for now is called Close-Reading with Computers, and this is also about 50% complete. This book is an exploration of the ways in which various methods from the field of computational stylometry can be used to advance the hermeneutic study of contemporary fiction, centring on David Mitchell's Cloud Atlas. I am attempting to publish all of these books through an open access route.