Cites & Insights: Crawford at Large

Libraries • Policy • Technology • Media

Volume 15, Number 11: December 2015

Intersections

ISSN 1534-0937

Ethics and Access 2015

Walt Crawford

Let's talk about ethics and access—primarily access to peer-reviewed articles, with a wide range of ethics-related subtopics. But first, recent history. During 2014, three ETHICS AND ACCESS essays appeared in Cites & Insights: ETHICS AND ACCESS 1: THE SAD CASE OF JEFFREY BEALL in the April 2014 issue; ETHICS AND ACCESS 2: THE SO-CALLED STING in the May 2014 issue; and ACCESS AND ETHICS 3 (OK, so I reversed the term) in the August 2014 issue. The first two had specific topics; the third was a roundup. This essay is, in some ways, a continuation of the first and third.

Not that other essays in C&I in 2014 didn't deal with ethics and access at least indirectly. I believe that's true of essays in the July, September, October/November and December 2014 issues—the three big original research reports and an Elsevier commentary. I could say the same of essays in the January, March, June, July and October 2015 issues—and although the April 2015 issue was about the economics of OA, I don't think you can neatly separate economics and ethics. (I've attempted to do so: the next big OA essay will probably be another THE ECONOMICS OF OPEN ACCESS in early 2016.)

There's a lot to discuss: I'm starting out with 114 tagged items in 13 categories, although I'm sure I won't wind up with 114 cited items. (At the end of the first draft, it appears that I've included 99 items, probably a new record for a C&I roundup.) I'm also personally involved in a fair number of these items, perhaps more so than in the past.

This essay is, deliberately, a little more lighthearted than some in the past—including some exclamation points in subheadings and the like. That isn't to say there's not serious business going on, but this is a complicated and, frankly, fuzzy area, so maybe a light touch makes sense. (How complicated is it? Well, when you think of "predatory" journals, who are the predators? The publishers? The scholars who serve on their editorial boards? The agencies that create requirements such that ever more scholars need to publish in "international" journals? Are you sure? Read Cameron Neylon's piece "Researcher as victim. Researcher as predator." in the PREDATORS! section before you answer those questions.)

In several of these items, supposedly "predatory" journals are named. Since I believe whitelists make more sense than blacklists, I'm sometimes adding a short note where feasible, most commonly saying "Not in DOAJ" or, less commonly, "In DOAJ."

Before we dive into the baker's dozen of subtopics, here's a thought experiment (with credit to Ross Mounce and others who've raised related issues in the past).

Blacklists: A Level Ground

I don't like blacklists. I've been pretty clear about that. I don't think they're the right way to go about things and I especially don't think librarians and scholars should be supporting them. From the Index Librorum Prohibitorum to the McCarthy-era Hollywood blacklist, I think blacklisting publications and ideas is an unsavory business.

But if we must blacklist journal publishers, and particularly blacklist all of a publisher's journals for the sins of one or two articles or journals, then we should at least have a level ground: Publishers should be blacklisted no matter what their business model is. So, for example, here are a few practices that seem to be indicative of problematic publishing—or, in one case, regarded by Mr. Predatory himself as inherently corrupting scholarship:

• Publishing fake journals to serve the interests of advertisers: There goes Elsevier.
• Publishing journals devoted to pseudoscience: Unless you consider the supposed "memory of water" to have some bizarre grounding in quantum physics—which I don't—then…well, there goes Elsevier. Again.
• Publishing journals that feature a large number of articles by the editor: Whoops, that's Elsevier again. (See later.)
• Listing people on journal editorial boards without their knowledge or consent: Oh, never mind, too repetitive.
• Charging for access to articles for which OA fees have been paid: Wiley and Springer, as well as…but we're done there.
• Republishing articles in more than one journal, sometimes without credit: There goes Emerald.
• Publishing nonsense papers: There goes IEEE.
• Overly broad coverage (which seems to be cited as a factor in some cases): Well, yes, there goes PLOS…along with Nature Publishing Group (what? "Nature" is a narrow topic?) and the AAAS (Science).
• Charging author-side fees: Every publisher that has "hybrid" journals or publishes APC-charging OA journals. So, in addition to Elsevier, Wiley and Springer, there go Taylor & Francis, SAGE, Oxford University Press, Cambridge University Press, the American Chemical Society, IOP, the American Institute of Physics, BMJ and RSC, at the very least. (That's something like 10,200 "hybrid" and subscription-only journals, to say nothing of several thousand OA journals.) I'd guess author-side "page" charges would wipe out hundreds or thousands of society publishers and others.

Those are just some examples. Of course, this partial blacklist now includes close to half of all subscription journals (probably much more than half, actually), including many or most of the largest ones. Leaving SciELO, Redalyc, a few thousand no-fee gold OA journals (most of them small), and probably a few thousand smaller subscription journals.

Do I believe all these publishers belong on blacklists? Of course not. But I do believe in a level playing field. Otherwise, you're saying "author-side charges corrupt scholarship…unless it's a publisher that also charges for subscriptions," or "pseudoscience is bad…unless it's a publisher we like," or whatever. Or you could give up on the inherently corrupt notion of blacklisting and help DOAJ build and maintain a whitelist—a database of journals that are likely to have ethical standards at least as high as those of the subscription publishers (and probably higher than some of the latter).

Enough of that. On to the essay! With one caveat that applies to almost all essays that include multiple subtopics: While I thought I knew which subtopic an article belongs in, there are always overlaps and faulty distinctions. The logic of the sequence of subtopics is, of course, indisputable because it’s not stated. It may be obvious; it may not. It may be sensible; it may not.

Effects!

A few items primarily about the harm done by—or at least the effects of—questionable journals and practices. More such possible effects turn up throughout the roundup here and there, but—as one of these items suggests—it's rarely clear just how damaging the actual journals are, as opposed to mandates that have the effect of pushing scholars into using unknown journals.

Publications in pseudoscientific journals damage reputation of Kazakhstani scholars

That's the headline for an August 17, 2014 story by Dinara Urazova in Tengri News, a Kazakhstan publication. The gist: Scopus released a list of 13 "pseudoscientific" journals that had either been removed from Scopus or were never part of it. (One, the British Journal of Education and Science, was never in Scopus; the rest were removed.) Here's the tough part:

"13 journals that have been excluded from the Scopus database are the so-called "predatory" publications that actively offer our scientists to publish their works for money," Deputy Director of the Research and Development Center of Almaty Management University Daniyar Sapargaliyev said. "In 2013, a third of all the articles of our scientists were published in such "predatory" publications. According to my estimates, about 50 percent of all the articles of Kazakhstani scientists published in foreign publications in 2014 were and will appear in such journals."

There’s been a huge growth in Kazakh publishing in “international” journals in recent years because of a new rule: You can’t get a PhD without publishing articles in journals listed in Scopus or Web of Science. The Scopus total for 2011-2014 was 3,794 Kazakhstani articles—”more than 40 percent of all publications of Kazakhstanis abroad, beginning 1923.” But 30% of those were in three (other) journals removed from Scopus for “malpractice.” At least the departments involved appear to be doing this the right way: while future publication in publications no longer in Scopus wouldn’t count toward degrees, people would not (apparently) be penalized for past publication.


None of the journals named is in DOAJ. There’s one comment on the article, and it’s one I find interesting. It begins “Why is a published paper a requirement for the awarding of a PhD in Kazakhstan?” and continues from there.

To some a citation is worth $3 per year

This story, by Lior Pachter on October 31, 2014 at Bits of DNA, is complicated. It's not about questionable journals as such; instead, it's about questionable citations, affiliations and authors of papers. When the US News and World Report 2014 global ranking of universities appeared, with rankings by subject areas, #7 on the Mathematics list was King Abdulaziz University (KAU) in Jeddah, Saudi Arabia—behind such luminaries as Berkeley, Stanford, Princeton, UCLA, Oxford and Harvard, but ahead of Cambridge, with MIT not even in the top 10.

I've been in the math department at Berkeley for 15 years, and during this entire time I've never (to my knowledge) met a person from their math department and I don't recall seeing a job application from any of their graduates… I honestly had never heard of the university in any scientific context. I've heard plenty about KAUST (the King Abdullah University of Science and Technology) during the past few years, especially because it is the first mixed-gender university campus in Saudi Arabia, is developing a robust research program based on serious faculty hires from overseas, and in a high profile move hired former Caltech president Jean-Lou Chameau to run the school. But KAU is not KAUST.

KAU has separate campuses for men and women and didn't have a Math PhD program until 2012. Pachter found it astonishing that KAU could beat MIT as a math department. So he investigated.

Although KAU's full time faculty are not very highly cited, it has amassed a large adjunct faculty that helped them greatly in these categories. In fact, in "normalized citation impact" KAU's math department is the top ranked in the world. This amazing statistic is due to the fact that KAU employs (as adjunct faculty) more than a quarter of the highly cited mathematicians at Thomson Reuters.

How is that possible? Apparently KAU solicits these professors, offering $72,000 a year for them to become adjunct faculty at KAU, with three one-week visits each year (business class air and five-star hotel included, along with all expenses)—and publish some papers with KAU affiliation, in addition to adding KAU affiliation to their ISI highly cited profiles. (Presumably that's the title's arithmetic: $72,000 a year, spread over a highly cited mathematician's citation count in the tens of thousands, works out to roughly $3 per citation per year.) Beyond that…there's a lot in the post. It's clear that "citation frenzy" is part of the issue; it seems at least possible that there are ethical issues at play. I'll quote the last serious paragraph in the post:

The impact factor of a journal is a measure of the average amount of citation per article. It is computed by averaging the citations over all articles published during the preceding two years, and its advertisement by journals reflects a publishing business model where demand for the journal comes from the impact factor, profit from free peer reviewing, and sales from closed subscription-based access. Everyone knows the peer review system is broken, but it's difficult to break free of when incentives are aligned to maintain it. Moreover, it leads to perverse focus of academic departments on the journals their faculty are publishing in and the citations they accumulate. Rankings such as those by USNWR reflect the emphasis on citations that originates with the journals, and so one cannot fault USNWR for including it as a factor and weighting it highly in their rankings. Having said that, USNWR should have known better than to publish the KAU math rankings; in fact it appears their publication might be a bug. The math department rankings are the only rankings that appear for KAU. They have been omitted entirely from the global overall ranking and other departmental rankings (I wonder if this is because USNWR knows about the adjunct faculty purchase). In any case, the citation frenzy feeds departments that in aggregate form universities. Universities such as King Abdulaziz, that may reach the point where they feel compelled to enter into the market of citations to increase their overall profile…
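For reference, the two-year impact factor Pachter describes can be written out explicitly (my notation, not his):

```latex
\mathrm{IF}_{y} = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
```

where C_y(x) is the number of citations received in year y by items the journal published in year x, and N_x is the number of citable items published in year x. Everything Pachter criticizes—citation frenzy, affiliation purchases—feeds the numerator.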

A long list of comments, including a detailed one from a professor who is KAU adjunct faculty and strongly defends the role. In this case, particularly because of the difficulty of deciding what’s going on, I strongly recommend reading the whole set of comments. (Excessive publishing in general, and excessive inclusion of “authors,” seem to be involved…but I don’t know enough to comment intelligently.)

What Happens When An Academic Publisher Stops Publishing?

Neuroskeptic asks that question on March 11, 2015 at their blog, occasioned by the apparent shutdown of Open Access Publishing London.

Neuroskeptic readers may remember OA Publishing London. It was founded in late 2012 after another publisher, BioMed Central (BMC), stopped publishing a journal called Head and Neck Oncology, alleging 'major irregularities' by the editors. A few weeks after that happened, OAPL was founded, the company being registered in the name of a relative of one of the editors of the axed journal. OAPL graciously adopted Head and Neck Oncology and also launched many other new journals too.


OAPL became quite successful. They published an average of two papers per day for their first two years, by my estimate, based on the fact that they published roughly 1,500 articles over some 730 days. Many of their journal editors are established scientists. But now OAPL appears to have become moribund. Their website has added only a small number of papers in the past five months: by my count they have no papers dated to 2015, and only a handful dated November or December 2014, across their whole set of journals.

There could be several good answers to the question at the head of the post; here's Neuroskeptic's conclusion:

To me, this case underlines the risks of new and untested publishers. The established publishing giants have their own set of problems, but at least they can be relied on to publish papers.

“Better the devil you know” is an easy answer to a lot of questions, I guess—but not one I find particularly satisfactory. As of this writing, OAPL shows two “active” journals, neither with 2015 articles, plus a whole fleet of journals “under development.” None of the journals are in DOAJ. I guess just ignoring all new publishers is one way of assuring…well, of assuring that there will be no improvement in the overall situation.

How much harm is done by predatory journals?

Zen Faulkes asks this provocative question in an April 6, 2015 post at NeuroDojo. He uses the same CC license I use (albeit a newer version), so I could legitimately quote the whole thing—and it's tempting. But I won't. He begins:

There is a cottage industry of people who feel the need to show, "There are journals that will publish crap!" And it's getting tiring.

Faulkes notes a few examples, then continues:

To listen to some of these, you could be forgiven for thinking that publishing a paper in one of these journals is practically academic misconduct: a career-ending, unrecoverable event. I talk to a lot of working scientists, both online and in person. And in all of that time, how many scientists have I heard of who have reported someone who submitted to one of these journals, who were not satisfied with their experience? Three. One experience is described in two posts (here and here), and a couple of others were tweeted at me when I asked for examples. And two were "my friend" stories, not personal accounts. For the amount of handwringing over predatory publishers, this is a vanishingly small number.

The links are to an interesting discussion from a scientist who submitted a paper to a journal that showed up on Beall's list; as far as the scientist was concerned, the peer reviews were appropriate, but the listing concerned them. At the conclusion of the first post, the scientist was inclined to allow the publication—but was then informed that one of the members of the editorial board was fictitious, so withdrew it. That's the core of the second post—but thanks to Mike Taylor in comments on the second post, the scientist (Sarah Hird) learned that PLOS One provides full or partial waivers—and the paper wound up in PLOS One. [In DOAJ.]

But back to the broader question: how much harm is done by questionable journals? Or, as Faulkes puts it:

Let's say that someone pays and publishes a paper in a predatory journal. Who is harmed, how much are they harmed, and what recourse is there to address the harm?

The author? An author who publishes in such a journal has paid the article processing charge. Okay, that sucks. But presumably the author knew she or he was going to be getting an invoice, and would not have gone that route if she or he was utterly unable to pay.

So how much damage does the author suffer if a paper is published without proper peer review? If the paper is competent, the author could [be] harmed because people will not read the paper because of the journal. But the paper is available [so] other researchers can use it and cite it if they so choose. People cite non-reviewed stuff all the time (conference abstracts, non-journal articles).

But the author could try to retract the article or publish it elsewhere—or could not list the paper in their CV. In the end, "I don't see severe harm done to an honest author who publishes in the wrong journal."

If not the author, maybe the public? Here, Faulkes has a lovely discussion that you should read in the original—for example, authors trying to spread bad information have more effective routes than "publishing in a crummy journal," as demonstrated every day by climate deniers and a number of diet people. There are other segments, and I'll quote this paragraph:

The research was not done well. This is no different from research published in other journals. There are many, many cases of research that was poorly done, but published anyway. This is why post-publication peer review is important. This is why replication is important. Scientists perform post-publication peer review all the time. It is our job. This is what we do.

Nicely (and honestly) said.

As for other scientists…Faulkes thinks papers in questionable journals will mostly be ignored. As for polluting the published scientific record, well, see the earlier quoted paragraph and this one:

The major reasons that scientists get their panties in a bunch about predatory journals is not because junk "predatory" [journals] have done much demonstrable harm to anyone, other than authors who are out their processing fees. I see lots [of] hand waving about the "purity and integrity of the scientific record," which is never how it's been. The scientific literature has always been messy. We always have [to] verify, replicate, and often correct published results.

Finally there’s the issue of exploiting scientists in developing nations: The problems for researchers in developing countries are not predatory journals. The problems that such researchers have is bad infrastructure, lack of support, and poor mentoring that prevents them from putting together papers that could be published in mainstream scientific journals. That they may be working under incentives that do not reward them for discriminating between journals. I also am waiting to hear from the waves of dissatisfied scientists from developing countries who feel they got ripped off.

That last may speak to the first item in this section as well: maybe the question shouldn’t be why PhD candidates are turning to questionable “international” journals—but why PhD candidates are required to have publications in international journals.

Keep out of bogus journals, SPPU tells researchers

We'll close this small section with this story by Sandip Kolhatkar on July 16, 2015 in the Pune Mirror. (Pune is a city of more than five million people in Maharashtra state, India.) The lead:

Worry about the spawning of research journals, compromising on ethical vetting of submissions, has gripped the world of academia as never before. Studies show that questionable journals are being particularly accessed by authors from developing countries, including India.

While a government agency is working on ethical guidelines, Savitribai Phule Pune University (SPPU) has listed "at least 700 predatory journals and publishers" and warned its scholars not to publish in them. Unfortunately, the list is based on Beall's lists. The article includes more details and suggestions on how proper journals should be identified—and that list includes "inclusion in the indexes of distinguished depositories such as PubMed, Web of Science, SSRN, Thomas Reuter and Google Scholar." I don't know how Thomson Reuters will feel about this, but I'm sure Google will be delighted that GS is now considered a "distinguished depository."

Predators!

A few commentaries on who or what is actually a predator, including one Swiftian proposal based on an unsound starting point.

Predators without Prey?

Wayne Bivens-Tatum posted this on November 21, 2014 at Academic Librarian. He begins with the story of "Get Me Off Your Fucking Mailing List," a concise and clear "article" written to protest spam conference invitations and submitted as a response to spam invitations to publish in a journal. (The professor who submitted it was told it would be published for $150. He didn't write it; he submitted a ten-year-old "paper" by two other authors. The full story is in Inside Higher Ed. The journal involved is not in DOAJ.)

B-T notes a reaction from Jeffrey Beall including this sentence: "The open-access publishing model has some serious weaknesses, and predatory journals are poisoning all of scholarly communication." B-T finds this amusing.

In this case, we have a professor who expected the journal to be a scam, which basically it is. He sent them an article written many years before to protest spam conference invitations. These two taken together imply that spamming researchers predates the rise of the so-called predatory journals, and that researchers can tell when something is a scam. "Poisoning all of scholarly communication" is a ridiculous overstatement on the face of it, but describing an interaction in which everyone, including the professor, knows what's really going on isn't poisoning anything. It's evidence that scholarly communication is working pretty well and that scholars know these journals are questionable from the beginning. What's missing from the analysis of "predatory" journals is any evidence of widespread trickery, where researchers who don't know any better are paying to publish in what they believe to be legitimate peer-reviewed scholarly journals. It's hard to prove something is predatory if there's not any prey. The professor in question is the exact opposite of prey, and if anything he's preyed upon the journal by making it the butt of his joke, but I guess it's easier to misinterpret evidence that challenges your beliefs instead of following the evidence to form your beliefs. Human, all to human.

That last sentence? You think that's an accident or an error? Note the only comment on the post (me):

Good post–and yes, I see what you did there. As to the real issue: I don't know how you would show actual predation. I strongly believe that many/most of the "predatory" journals that publish anything at all (most of them don't exist as actual journals) are dealing with "prey" who know exactly what they're getting into. But I can't prove that.

I still believe that. I should correct my first sentence, though: it should be “I see what you did they’re.”

The Guard Dog. Who Keeps Watch for Fraudulent and Predatory Open Access Journals?

This piece, by Hontas Farmer on December 23, 2014 at Quantum Gravity, is a little tricky because Farmer apparently didn't proofread the post. The gist: Farmer published an article in International Journal of Astronomy and Astrophysics, a SCIRP journal that is in DOAJ. SCIRP is also on Beall's list, which—given Beall's "one worm in the apple rots the whole orchard" approach—means he regards IJAA as a predatory journal. Farmer took him to task for that. Unfortunately, Farmer precedes that discussion by saying "Jeffrey Beall has done overall admirable work collecting a list of suspect publishers"—although he does follow that with this:

The problem with his work is it covers such a broad swath of publishing topics and publishers. While many cases are cut and dried others are not. Many of the journals have one rather poorly designed website, no peer review, and offer a "discount" for publishing right away. They spam people etc. Some top journals do that too. I was once asked to submit something to the journal Classical and Quantum Gravity, as I recall, associated with the IOP in Britain. They did not publish it, but even top well respected journals do that sometimes.

Farmer’s message to Beall notes that IJAA was suggested to him by the editor of one of the “we only publish really important articles” journals, and notes that the biggest-name journals have notoriously published some “really bad articles.” For that matter, one of the top (subscription) journals famously rejected the original paper describing lasers. It would be more fair to call out individual bad or fishy articles. Then question those articles. Questioning a whole publisher and saying really caustic things about the journals, editors, and scholars who publish there, primarily because of the low cost* is at best inflammatory and prejudicial. Especially when some of those journals are acceptable references of record to the scholarly communities they wish to serve. I think astronomers know better than you what Astronomy journals are fraudulent. IJAA does not accept everything by a long shot. My paper was under review for a month, and was in revision for a year. To get a discount they did let me referee some papers and I personally rejected papers which have not since been published there. I’ve seen the inside and it’s Kosher. Cites & Insights

The asterisk: Farmer was considering publishing another paper in an OA journal from Elsevier—but $2,500 is just too much money. Farmer continues with a proposal for peer-reviewing journals themselves, based on reviewing randomly selected articles. I won’t comment on that suggestion.

New Predatory Publishing in Old Bottles

Barbara Fister on March 9, 2015 at "Library Babel Fish" in Inside Higher Ed, and as always with Fister, my first suggestion is to read her column directly. Fister's discussing a different kind of predatory publishing:

What worries me far more than these fairly obvious scams are the emerging business practices being used by highly profitable publishers with long and distinguished pedigrees that are treating open access as a new revenue stream that can be both open and closed – earning money through subscriptions and author fees. (Monica Berger and Jill Cirasella have just published an excellent article on why we need a broader understanding of predatory publishing practices.) But double-dipping isn't enough. Witness a blogger's report that Elsevier (founded in 1880) is selling articles published by Wiley (founded in 1807) as open access articles. Elsevier's PR team responded quickly by removing the article from its for-sale options. Apparently, though, the company continues to sell open access articles originally published by Wiley under the same terms.

There’s an explanation of how this is possible and some tricky license clauses, with Fister’s judgment: “Though it is an unethical practice on its face, it may be legal.” If the articles used CC BY, it would certainly be legal—but, if there’s no value added, it would at least be ethically shaky. Or, in other words, predatory. The comments are interesting—including one from Elsevier’s Dr. Alicia Wise, noting that all 27 articles involved have now been opened for free access. “We continually adapt our systems and procedures to strive for zero defects. If for any reason a problem occurs we are committed to deal with it in a fair and efficient manner.” Another commenter has a comment about that last sentence and his own experience. There’s also an interesting discussion regarding the ethics of legally selling free stuff without significant added value. Joseph Esposito commented on Fister’s piece in “Getting Beyond ‘Post and Forget’ Open Access” on March 17, 2015 at the scholarly kitchen. He doesn’t think there’s a moral issue (I’m not sure I’d equate

December 2015

6

ethical and moral, but…) and says “I think the commercialization of OA material is in the interest of everybody.” How so? Esposito goes into an extended example that…well, here’s just part of it: Here is where my contrarian instincts come to the fore. In my view, everyone should want Publisher B to market the material, not just the greedy shareholders of the company. This is because that which is unsustainable cannot be sustained, and that which is sustainable is sustained, by definition. If the OA version of the material remains available and is well distributed, then no one is going to pay for the tollaccess version. That means that whatever dark motives Publisher B may have had, if there is no market for a paywalled version of the articles, Publisher B will cease its practice. The attempt to monetize Publisher A’s OA material by Publisher B will simply disappear, written off as an unsuccessful experiment.

Um. Later, I get the sense that Esposito's talking about value-added republishing (when he discusses making money by publishing public domain materials) or marketing or both—and at that point he founders, I think, because he states as a given that true OA publishers don't publicize ("market") their publications:

The economic incentive to reach new audiences could make that otherwise OA article into something that gets brought to the attention of more and more readers. What incentive does a pure-play OA publisher have to market the materials it publishes? Unfortunately, the real name of this game is not "Open Access" but "Post and Forget." Well-designed commerce, in other words, leads to enhanced discovery. And when it doesn't, it enters the archaeological record.

I’m sure PLOS, for one, will be mightily surprised to hear that it doesn’t spend money on publicity or marketing. So would Co-Action Press (as noted in a comment). But Esposito’s satisfied himself that OA publishers just let the articles molder. And boy, is he dismissive of anybody who thinks about anything other than the mighty dollar: If we can chase the idealists, ideologues, and moralists out of the temple, we may see that the practical act of providing economic incentives may be able to do more for access than any resolution from Budapest, Bayonne, Bethesda, or Berlin.

First thing we do, we get rid of all the idealists, right?

Open-access publisher sacks 31 editors amid fierce row over independence

Can an OA publisher act in a predatory manner toward its own journals? Consider this article by Martin Enserink on May 20, 2015 at ScienceInsider.

The heart of it: 31 editors at two of Frontiers' OA journals "complained that company staff were interfering with editorial decisions and violating core principles of medical publishing." So Frontiers dismissed the editors.

Emotions are running high. The editors say Frontiers' publication practices are designed to maximize the company's profits, not the quality of papers, and that this could harm patients. Frederick Fenter, executive editor at Frontiers, says the company had no choice but to fire the entire group because they were holding up the publication of papers until their demands were met, which he likens to "extortion."

Both journals are in DOAJ, publishing 50 and 12 articles respectively in 2014, their first year. Both charge $1,900 APC. I graded both "BQ" for questionable statements on the journal home pages. What's this all about?

One key issue, the manifesto says, is the power of so-called associate editors, of which each journal has about 150. These are academics who handle the review process and can accept a manuscript—after it has passed muster with two review editors—without any involvement from the editors-in-chief or field editors. (Authors can pick their "preferred" associate editor themselves.) The critics call this process "totally unacceptable" because it sidesteps the editors-in-chief, and a violation of internationally accepted standards. The World Association of Medical Editors (WAME), for instance, says that "Editors-in-chief should have full authority over the editorial content" of their journal. Jos van der Meer, a former editor-in-chief of Frontiers in Medicine and chief editor of its Infectious Diseases section, says he was sometimes notified about the acceptance of papers that he didn't approve of, or that he felt were handled by the wrong associate editor. (On the other hand, when a paper was rejected, Frontiers would ask him if it was the right decision, he says.) "I realized I had very little to say," Van der Meer says. "I felt like a puppet on a string."

Apparently staff would also move manuscripts to different editors, invite authors to write commentaries without the awareness of the editors, and override editorial decisions. Naturally, Frontiers denies that it’s done anything wrong. (Nature Publishing Group at one point owned most of Frontiers; Springer Nature or, rather, Holtzbrinck now apparently owns less than half of Frontiers.) The comments are interesting. By the way, Frontiers is not on Beall’s list. (Well, it wasn’t when I wrote this: since then, it’s been added. See later.)


Open for Business

Barbara Fister again, this time on August 10, 2015 at "Library Babel Fish" in Inside Higher Ed.

I was startled when a physicist said to me the other day "this open access thing is a scam."

The physicist wasn’t talking about the lists: No, he was talking about open access journals being launched by Nature and other leading science publishers. “It’s like they’ll publish stuff that gets rejected so long as you pay.” We got interrupted and I never got to the bottom of it, though I did have a chance to ask his opinion of hybrid journals—the ones that charge a subscription but will make individual articles free if you pay a fee which is often in the thousands of dollars. His response? “That’s ridiculous!” Yeah, I think so, too. From his perspective, the open access scam wasn’t shoddy “predatory” publishers (a concept that really needs to be defined more broadly than just the sites on Beall’s list). It was the fact that respected journals like Science will funnel articles rejected from its highly-prestigious pages to an online mega-journal site where they can be published without further review so long as the author can spare the cash. Put it that way, it does kind of seem like a scam.

The last link above leads to a ScienceInsider piece announcing Science Advances. (If you’re wondering, yes, it’s in DOAJ, but not in my study because it shows a 2015 launch date—and a $3,000 APC.) Fister has more to say about this and the whole access situation; read her column.

Predatory Publishing: A Modest Proposal

Richard Poynder on September 8, 2015 at Open and Shut?—and I find myself approaching this one with considerably mixed feelings. The proposal:

Why does the OA movement not create a database containing all the names of researchers who sit on the editorial and/or advisory boards of the publishers on Beall's list, along with the names of the journals with which they are associated? Such a database could perhaps serve a number of purposes…

This presumes two things right off the bat:

• That "the OA movement" exists, and that it is a coherent entity capable of creating and maintaining such a database
• That it's appropriate to enshrine Beall's lists as the authoritative source of "predatory" journals.

The first is, in my opinion, nonsense. The second is worse: it provides further credibility to junk lists, and by its very existence increases their apparent validity. It would also, to be sure, be a massive and expensive effort, both initially and ongoing. I'm aware of what it took me to gather a relatively small number of items about each journal in the lists, a number small enough to fit in a spreadsheet. Gathering and recording data on each editorial and advisory board member would require many times the effort and a true database.

The rest of the fairly long post and longer set of comments, with Poynder engaged in the commenting process, is interesting and revealing. I note that Poynder does not scare-quote "predatory publishing" and seems fully on board with the notion that only OA publishers can be predatory; that he does scare-quote the pretty well proven notion (given some of Beall's articles and statements) that Beall is anti-OA; that he seems to be more critical of DOAJ's clear-minded steps to improve its criteria than of Beall's stuff; that he blasts APCs without ever noting that author-side charges are rampant among subscription journals (I guess calling them "page charges" makes them OK); and apparently feels that those of us who regard Beall as worth ignoring have a "closed mind." Poynder uses "OA movement" a lot!

Within the comments, the interchange between Mike Nason (who offers a clear and I think worthy critique) and Richard Poynder is interesting, as are several of the other comments and responses. You may want to read the post; if so, read the comments as well. Of course, I am reminded that "A Modest Proposal" has historical ties to satiric overstatement, and the more I think about the efforts required and the likely outcomes, the more I suspect that Poynder's purpose really is Swiftian…

Researcher as victim. Researcher as predator.

Cameron Neylon posted this on September 7, 2015 at Science in the Open—and the blog is CC0 (that is, placed in the public domain), so I can offer you access for a mere $30 for 24 hours (strike that) incorporate the whole post here. The more I look at what Neylon's saying and how he's saying it, that seems like the right thing to do:

Researchers for the most part are pretty smart people. At the very least they've managed to play the games required of undergraduate and post graduate students, and out-competed a substantial proportion of other[s] vying for the same places. Senior academics have survived running the gauntlet of getting published, and getting funded, at least enough to stay in the race.

It has been observed that when smart people do dumb things it is worth looking closer. The dumb thing is usually being done for a smart reason. Indeed we might go one step further and suggest that where a system is populated largely by smart people the proportion of dumb things they are doing could be a good diagnostic of how good the system is at doing what it is meant to do, as opposed to what it is measured to do. On this basis we might wonder about the health of many universities.

We also know there are a few issues with our systems of scholarly communications. And some of these involve unscrupulous and deceitful players out to reap a commercial gain. The story of so called "predatory" journals is a familiar one, which usually focusses on "publishers" claiming to offer open access services, but more recently an article in the Guardian took a similar stance on traditional academic monograph publishers. In both cases the researcher is presented as a hapless victim, "hoodwinked" as the headline states into parting with money (either directly in the form of APCs or indirectly through their libraries).

But really? I've no intent to excuse the behaviour of these publishers, but they are simply serving a demand. A demand created by researchers under immense pressure to demonstrate their productivity. Researchers who know how to play the game. What is a line on a CV worth? Does it make that grant a little more likely? Does it get you past the magic threshold to get on the applicant short list? Is there a shortcut? Researchers are experts at behaviour optimisation and seeing how systems work. I simply don't buy the "hapless victim" stance and a lot of the hand wringing is disingenuous at best.

On a harsh economic analysis this is perfectly rational behaviour. Smart people doing dumb things for smart reasons. The expansion of journal lists, the increasing costs to libraries, and the ever expanding list of journals that would take just about anything were never perceived as a problem by researchers when they didn't see the bills. Suddenly as the business model shifts and the researcher sees the costs the arms are going up. The ever dropping circulation (and ever rising prices) of monographs was never really seen as a problem until the library budgets for monographs started to disappear as the serials crisis started to bite.

The symptoms aren't restricted to dodgy publishing practices of course. Peer review cartels and fake reviewers result from the same impulse, the need to get more stuff published. Paper mills, fake journals, secondary imprints that will take any book proposal, predatory OA and bottom feeding subscription journals are all expressions of the same set of problems. And the terrifying thing is that responsible publishers are doing a pretty good job of catching a lot of it. The scale of the problem is much, much greater than is obvious from the handful of scandals and a few tens of retractions.

At times it is tempting to suggest that it is not publishers that are predatory, but researchers. But of course the truth is that we are all complicit, from publishers and authors producing content that no-one reads, through to administrators counting things that they know don't matter, and funders and governments pointing to productivity, not to mention secondary publishers increasing the scope of their indices knowing that this leads to ever increasing inflation of the metrics that makes the whole system go round. We are all complicit.

Everyone is playing the game, but that doesn't mean that all the players have the same freedom to change it. Commercial suppliers are only responding to demand. Governments and funders can only respond to the quality assessments of the research community. It is only the research community itself that can change the rules. And only a subset of that. Emerging researchers don't have the power to buck the system. It is senior researchers, and in particular those who mediate the interface between the sources of funding and the community, the institutional leaders, Vice-Chancellors, Presidents, Deans and Heads of Department. If institutional leaders chose to change the game, the world would shift tomorrow.

Scott Edmunds perhaps summed it up best at the FORCE2015 meeting in Oxford: It is no longer the case that people are gaming the system, the system has become a game. It's time to say Game Over. If we cast ourselves as mere victims we'll never change the rules. The whole narrative is an excuse for doing nothing.

I reprint this because I believe it’s well-said and important; Neylon’s getting at some uncomfortable truths. There are a handful of comments on the original that you might also want to read, along with a rather nice illustration.

Predatory Priorities

Björn Brembs posted this on October 23, 2015 on his eponymous blog. It's partly about Beall's lists and the questionable practice of damning an entire publisher for one bad journal—but he thinks there are more important "predatory" priorities in any case. Specifically:

2a. There is a group of publishers which publish the least reliable science. These publishers claim to perform a superior form of peer review (e.g. by denigrating other forms of peer-review as "peer-review light"), but in fact most of the submitted articles are never seen by peers (but instead by the professional editors of these journals). For this minority of articles that are indeed peer-reviewed, acceptance rate is about 40%. Sometimes this practice keeps other scientists unnecessarily busy, such as in replicability projects or #arseniclife. Sometimes this practice has deleterious effects on society, such as the recent LaCour or Stapel cases. Sometimes this practice leads indirectly to human death, such as in irreproducible cancer research. Sometimes this practice leads directly to human death, such as in the MMR/autism case. These publishers charge the taxpayer on average US$5000 per article and try to prevent the taxpayer from checking the article for potential errors.

These publishers are not open access: The $5,000 per article average is based on dividing total scholarly journals expenditures by (non-OA?) article totals. He contrasts that with Group 2b:

2b. There is a group of publishers which similarly claim to perform peer-review but in fact do not perform any peer-review at all. Apparently, it seems as if they aren't even performing much editorial review. The acceptance rate in these journals is commonly a little more than twice as high as in the journals from 2a, i.e. ~100%. Other than the duped authors, to my knowledge there are no other harmed parties, but I may have missed them. These publishers charge the taxpayer on average ~US$300 per article and do allow the taxpayer to check the articles for potential errors.

Brembs believes dealing with 2a is more important than dealing with 2b, given “the larger harm…inflict[ed] on the society at large as well as the scientific community.”
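To put Brembs's money comparison in concrete terms, here's a minimal back-of-the-envelope sketch. The total-spend and article-count inputs are my illustrative assumptions (chosen to reproduce his roughly US$5,000 average), not figures from his post:

```python
# Rough per-article cost comparison, following Brembs's framing.
# Assumed inputs: roughly $10B/year in subscription spending over
# roughly 2M paywalled articles. Both numbers are illustrative.
subscription_spend = 10_000_000_000  # assumed total annual expenditure, USD
subscription_articles = 2_000_000    # assumed annual article count

group_2a_cost = subscription_spend / subscription_articles  # -> $5,000/article
group_2b_cost = 300                  # Brembs's stated average for group 2b, USD

print(f"Group 2a: ~${group_2a_cost:,.0f} per article, paywalled")
print(f"Group 2b: ~${group_2b_cost:,.0f} per article, openly checkable")
```

On those (assumed) inputs, the journals Brembs worries most about cost the taxpayer more than fifteen times as much per article as the no-review bottom feeders.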

Stings!

I devoted the first 20 pages of the May 2014 Cites & Insights to ETHICS AND ACCESS 2: THE SO-CALLED STING—that is, the Bohannon hoax and what it did or didn't uncover. I'm proud of that essay, and will suggest that you go back and reread it. I'm especially proud of pages 16-20, where I submitted the 315 OA journals involved to a brief "sniff test."

Stings make exciting copy, so it should be no surprise that Bohannon's—while perhaps the broadest—wasn't the last. For those who remember the movie (is Scott Joplin's The Entertainer an earworm at this point?), you might remember that the whole thing was a confidence game: maybe for "good purposes," but a con nonetheless. (Note that all of these stings follow the Bohannon Cherrypicking Approach: they only aim at OA journals, never offering the papers to an equal number of, or any, subscription journals.)

The link doesn’t work: hey, it’s 18 months old and, you know, newspapers. Unfortunately, as soon as the discussion gets going, the phrase “online-only, for-profit operations” links to You Know Who—and Stromberg says flatly that these journals “don’t conduct peer-review.” (That generalization is pretty clearly false.) So there it is: a quickly-done paste-up sent to 18 journals, eight of which said they’d publish it for anywhere from $1,000 to $5,000 (the latter an astonishingly high fee for any journal, much less a questionable one). The Vox article includes a link to the three-page paper (damned if I could find any mention of Mars, but given the amount of jargon scattered through the paper, it may be there) as it stands on one of the journals. (The journals named in the article are not in DOAJ.) The reporter’s done his own odd “sting”—accepting the offer from a “predatory” book publisher to publish his undergrad thesis for no fee…and then added an irrelevant sentence toward the end of the thesis, “highlighting the fact that they publish without proofreading or editing.” If you’re puzzling over how publishing for no fee can be predatory, the explanation is that the author has now given the publisher rights to sell the book “for exorbitant prices online.” (The “exorbitant” price: $68 for what’s assumed to be a scholarly monograph with almost no real market.) Gotta admit, if a publisher said they would publish my accepted thesis, I would assume that they would publish it as it stands, without editing or proofreading—so I’m not sure what to think of Stromberg’s so-called sting. The article does not get a lot better. Take these two paragraphs: The existence of these dubious publishers can be traced to the early 2000’s, when the first open-access online journals were founded. Instead of printing each issue and making money by selling subscriptions to libraries, these journals were given out for free online, and supported themselves largely through fees paid by the actual researchers submitting work to be published.

Take this one, as reported by Joseph Stromberg on April 24, 2014 at Vox.

The first of these journals were and are legitimate— PLOS ONE, for instance, rejected Bohannon’s lichen paper because it failed peer review. But these were soon followed by predatory publishers—largely based abroad—that basically pose as legitimate journals so researchers will pay their processing fees.

A reporter for the Ottawa Citizen wrote a plagiarized, completely incoherent paper about soils, cancer treatment, and Mars. And eight scientific journals want to publish it.

The first OA journals were published in the 1980s, not the “early 2000’s”; and it is simply not true that all or even most OA journals support themselves through author-side fees, much less fees paid by “the

A reporter published a fake study to expose how terrible some scientific journals are

Cites & Insights

December 2015

10

actual researchers.” And, of course, PLOS One is so far from being the first OA journal that it’s not even funny. “Largely based abroad” tosses in a little xenophobia to sweeten the pot. Stromberg’s concoction ends with this: Scientists view this industry as a problem for a few reasons: it reduces trust in science, allows unqualified researchers to build their resumes with fake or unreliable work, and makes research for legitimate scientists more difficult, as they’re forced to wade through dozens of worthless papers to find useful ones.

The link for this alarming list of consequences? Bohannon’s article. Pfeh. I get that Vox is apparently a linkbait site, but nonetheless pfeh.

Bad Science Journals

This article by Steven Novella on August 27, 2014 at Science-Based Medicine begins oddly:

It's an excellent business model. The only real infrastructure you need is a website, and you can have a custom site made for $5-10 thousand. Then you just have the monthly bandwidth charges. The rest is just e-marketing, which can be done for free, or the cost of some e-mails lists. After that, the money just comes rolling in. The best part is that other people do all the actual work. All you have to do is charge them for publishing on your open-access online journal.

Except that the money generally does not come rolling in, at least not at levels sufficient to justify spending $5 to $10K for a website. When I looked into it, even assuming all journals in Beall's lists are actually scams, which I absolutely do not, fewer than 400 could have taken in $25,000 or more in their best year, and not many more than 600 could have taken in $15K or more.

After this, we get the usual encomium for Beall, coupled with the seeming assumption that none of the journals in his lists do peer review at all—an assumption that has been disproven many times, based on comments from authors whose papers were reviewed and in many cases rejected.

The twist to this article is a journal that supposedly turned from a respectable publication into a scam after sale to a new publisher and is taking it in hand over fist by publishing worthless articles. This case may be egregious; as usual, the journal in question is not in DOAJ. Novella's intent on showing the harm:

All of this creates more work for scientists and academics who have better things to do. Scientists have to spend time vetting journals before submitting their work. Of course, everyone would like to publish in Science or Nature, but most papers are published in second- or third-tier journals which are legitimate but maybe not as well known. The predatory journals are hiding among the herd of such legitimate but more obscure journals.

Scientists actually have to think about where they're submitting work. The horror, the horror!

Further, younger and perhaps less experienced academics are approached to be on editorial boards of new open-access journals, or are encouraged to submit their work there. Meanwhile more experienced academics with name recognition might have their name simply added to an editorial board without their consent. Promotional committees also have to spend additional time vetting every publication in a journal they don't immediately recognize.

All of which presumes that everything named by Beall is, in fact, worthless and predatory—which Novella pretty clearly does. (I like the complaint that promotion & tenure committees might have to do more than weigh the stack of published papers.)

As dubious open-access journals proliferate, it becomes more difficult to keep track, and the published literature itself becomes overwhelmed with poor-quality papers. This generates further confusion for the media, who have a tough enough time reporting science news. Essentially this phenomenon is ramping up the noise and drowning out the signal of quality scientific research.

Overwhelmed. Strong language, that. Best guess: articles in Beall's-list journals, some but certainly not all of which might be dubious, may have accounted for 135,000 or so out of 1.4 to 2.5 million (or so) scholarly articles. So, where the media could otherwise read through a mere 1.3 to 2.4 million articles (or so) and see what's worth reporting on, now they're overwhelmed by an extra 135,000 in journals they'll probably never see. (The sketch at the end of this item does the division.) Something is dubious here, to be sure.

Here's an interesting passage:

There are, of course, solutions. Beall's list of predatory journals is a good start. In addition to this "black list," a thorough and rigorous "white list," or seal of approval would also be very helpful. This is supposed to exist, but clearly too many dubious journals are slipping through.

Novella does not mention DOAJ—and seems to think some agency (who? with what power given the First Amendment?) should be there to assure that "dubious journals" don't "slip through." Did I mention that the journal in question is not in DOAJ?

The rest of the discussion seems to make it clear that, while the author says he thinks OA is a viable model, he's more concerned that poor-quality articles are appearing—presumably only in OA journals, since there's no mention of questionable subscription journals and articles. An article on arsenic-based life could, of course, only appear in some scammy OA journal. Right? The comments are…bizarre, partly because of a CAPS-loving troll.

Predatory Journals: An Experiment

We'll close this section with this, by Dr. Fiona McQuarrie on January 21, 2015 at All About Work. McQuarrie viewed with alarm all those predatory journals (there is, of course, the mandatory praise for Beall) and noted that she gets at least three emails a week soliciting manuscripts. (Heck, I get one or two a week—mostly from IJAIIR, whoever that is—and I'm not a published scientist at all.) She also notes two of the stings, the "Get me off…" paper and another Joseph Stromberg/Vox exposé. (Stromberg's articles heavily self-plagiarize, but when you're making a mini-career…)

So…she did a small experiment. She took the two nonsense papers, changed the names of the authors and their institutional affiliation, and sent them to three journals (or journals from three publishers) from which she'd received a lot of scam email. One journal almost immediately sent back a message asking for confirmation that she'd pay the $500 fee, and kept sending new emails. The second sent back immediate email saying the articles would be typeset (really? typeset?) and reviewed in 15 days, which McQuarrie calls "a ludicrously short time for a manuscript review." One was accepted after 10 days with some odd review comments—although, actually, one reviewer properly said the article was out of scope. The third one is interesting because the email gave the name of a specific individual as editor-in-chief—an individual who, when contacted, said they'd never been involved with the journal. The journal took three days to accept one article for a $160 processing fee. None of the journals accepted "Get me off…" Here's the closing paragraph:

After this experience, I can only urge researchers to be very careful when choosing where to publish their work. The old adage of "if it seems too good to be true, it probably isn't" is very relevant in this situation. Google the name of the journal to see if there are online criticisms of the journal's business practices. Look at how the journal communicates; legitimate and respectable journals don't use Gmail or Hotmail email addresses. Look at the websites of the well-known journals in your field, and look at the information those websites usually contain; if a journal's website doesn't have that same type of information (e.g. the names of the editor and the members of the editorial board), be careful. Find out if the well-known journals in your field charge a fee for manuscript review or preparation; this is the norm in some academic disciplines, but if it isn't common in yours, be very wary of journals that charge a fee for publication. Researchers being cautious about where they publish their work, and refusing to submit manuscripts to questionable publishers, is the only way to put predatory journals out of business.

I have two problems with that final paragraph: One of omission and one of commission. The omission: She should really start her advice by saying "See whether the journal is in DOAJ." None of those involved in this sting are. The one of commission: "legitimate and respectable journals don't use Gmail or Hotmail email addresses." Yes, I know that's one of Beall's flashpoints, but it's just plain wrong, at least for Gmail. There is no legitimate reason for a small OA journal (two-thirds of the serious OA journals in DOAJ published fewer than 35 articles a year in 2011-2014) to establish its own email system. What's wrong with using a Gmail account? That's as spurious a "predatory" criterion as "I don't think Joe Smith is his real name" or "I suspect this journal is published in India." Otherwise, the final paragraph's OK, and McQuarrie has established that three little journals, none of them in DOAJ, have editors who apparently aren't Simpsons fans—and don't do very good review. Come to think of it, wouldn't most scholars recognize getting lots of unsolicited email requesting manuscripts, especially outside their field, as at least a yellow flag if not a red one?
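Since "is it in DOAJ?" keeps coming up as the natural first screening step, here is a minimal sketch of what that check might look like in practice, assuming you have downloaded DOAJ's public journal-metadata CSV. The file name and column headers are my assumptions, to be verified against the actual file; they are not a documented schema.

```python
import csv

def load_doaj_issns(csv_path):
    """Collect every ISSN listed in a DOAJ journal-metadata CSV
    into one lookup set (print and electronic ISSNs alike)."""
    issns = set()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumed column names; check the real CSV header first.
            for col in ("Journal ISSN (print version)",
                        "Journal EISSN (online version)"):
                value = (row.get(col) or "").strip()
                if value:
                    issns.add(value)
    return issns

def in_doaj(issn, doaj_issns):
    """Question Zero: is this journal's ISSN on the whitelist?"""
    return issn.strip() in doaj_issns

# issns = load_doaj_issns("doaj_journals.csv")  # hypothetical file name
# print(in_doaj("1534-0937", issns))
```

Absence from the set doesn't condemn a journal: it may be new, or may simply never have applied. But it is the cue to start asking harder questions.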

Scams!
Six somewhat scammy situations, and you may be shocked by #6. Or not. (Trying to get into that linkbait listicle mindframe; it's harder than it seems.)

Serbian journal lands in hot water after challenge on 24 hour peer review that cost 1785 euros
The story (by "micotatalovic") appeared on July 7, 2014 at Retraction Watch. This time it is a journal that is in DOAJ: Archives of Biological Sciences.
The journal, Archives of Biological Sciences (ABS) is the official publication of the Serbian Biological Society, copublished by ten organisations in Serbia and Bosnia. It was accused (on June 12) on the Scholarly Open Access
blog of accepting a paper in 24 hours with no peer review, and demanding 1785 euros for publishing it. After receiving a quick acceptance letter for his plant sciences paper, Jaime A. Teixeira da Silva, an outspoken critic of what he considers corruption in scientific publishing, said he was "extremely concerned" and demanded "a full explanation of the predatory publication charges" in an e-mail to the editor. The apparent lack of peer review and the request for a publishing fee when "DOAJ indicates that there are no publishing costs" led him to request explanation before revealing "this serious academic fraud."

Looking at DOAJ’s record for this journal currently, it says “Information on publication charges not yet available for this journal,” and the journal’s own website currently includes this statement—in boldface within the instructions for authors: The Archives of Biological Sciences does not charge manuscript processing and/or publishing.

The journal publishes around 200 articles a year. So what's going on here? Consider this:
The journal's editor, Božidar Ćurčić from University of Belgrade's Faculty of Biology, initially defended their procedures in an e-mail to me, as I reported on Balkan Science Beat blog, claiming the peer review had been done by two anonymous reviewers plus the editor, that it took 36 hours rather than 24 hours, and that the money was for 'support', not publishing costs. He also claimed that allegations in the blog about an extremely high level of self-citation do not stand: "the self-citation is within the limit of normal procedure."

The money part makes no sense. A Serbian agency investigated the journal and found other problems: manipulative citation behavior, plagiarism, lots of articles from the editor and his children—in all, bad behavior. Ten days after this piece appeared, the editor in chief resigned, as did the entire editorial board. They were replaced. I'm delighted that Retraction Watch items mention related posts—even ones that appear after the original post. Thus, we're led to "Serbian journal cleans house with 16 retractions and 2 corrections after investigation," on August 6, 2015. The title provides the key information (it's a long post, including all the notices of retractions and corrections), but here are noteworthy comments from the new editors:
In the summer of 2014, the Archives of Biological Sciences was singled out as a scientific journal that had veered away from the ethical publishing practice of scientific journals and was placed on the list of predatory journals. Members of the scientific public in Serbia directly affected by the accusations were mobilised, and after thorough investigation it was concluded that
many of these accusations were founded. As a result, the Serbian Biological Society and other co-publishers of the journal replaced the entire editorial team. The new Editorial Board was confronted with the difficult task of resolving many outstanding issues which primarily affected contributors and scientific research in life sciences, as well as official bodies that support scientific publishing in Serbia. We are issuing this announcement in order to express our deep regret to the scientific community for past mismanagement of scientific information. The new editorial team will continue to correct all mistakes while striving to ensure accurate, timely, fair and ethical publication of scientific papers that the Archives of Biological Sciences has been traditionally known for. We are calling all readers of the journal to directly contact the editorial office and the editors of the journal regarding any case of publishing malpractice in the future, to ensure prompt remediation. The new Editorial Board adheres to the Best Practice Guidelines in order to improve the overall quality of the Archives of Biological Sciences and in this way hopes to regain the trust of previous and future contributors.

Are there examples of subscription journals that editors have used in scammy ways? Of course there are. Had this journal become scammy? Apparently so. Has it been redeemed? The new editors are certainly making the right moves.

The Strange Rise and Fall of a Medical Journal
By Neuroskeptic on August 28, 2014 at Neuroskeptic—and it's a prequel of sorts, leading up to "What Happens When an Academic Publisher Stops Publishing?" in the EFFECTS! section earlier. Neuroskeptic links to a 15-page paper by Dr. Waseem Jerjes, editor of Head and Neck Oncology, "defending his own behavior as editor, and refuting charges of misconduct against him." In July 2012, BioMed Central—which had published Head and Neck Oncology (HNO)—asked Jerjes and other editors-in-chief to step down after finding "serious editorial misconduct," then stopped publishing the journal. Then Open Access Publishing London (UK) Ltd. started publishing HNO with the same editorial board. The director of this publisher appears to be a relative of Jerjes. There's another, similarly-named publisher at the same address, also directed by a Jerjes. (There's more: read the post.) Here's what BMC had to say:
Amongst other charges, the main claim is that Jerjes was accepting papers on his editorial judgement alone, without sending them out for peer review (except that provided by himself). BMC also allege that Jerjes frequently submitted his own work to HNO,
and that in many cases, such articles were apparently handled by Waseem Jerjes, an author of the article and Editor-in-Chief of the journal at that time. The manuscript was reviewed by one recent co-author of some of the authors and accepted without revision.

Attempts to get to Jerjes' response yield 403 "Forbidden" messages. Again, there's more to the post, and the situation does seem questionable. The publisher(s) seem(s) to have disappeared (see earlier item), and the journal isn't in DOAJ. Full credit to Neuroskeptic for the last paragraph of the item (before a postscript):
Whether or not OAPL was founded as a means to keep HNO alive, it quickly expanded beyond that remit, and now publishes over 50 open access journals. Authors can be charged up to £750 to publish in an OAPL journal. I believe that anyone considering doing so ought to think carefully before entrusting their money, and their manuscript, to an organization with a history of this nature. There are many quality open access publishers.

The first link yields yet another Forbidden error. The second link—for “many quality open access publishers”—is a link to doaj.org. Nicely done.

iMed Publishing fell for Bohannon's chocolate hoax, but that's not the worst thing about them
This one's by Matt Hodgkinson on August 12, 2015 at Journalology, and it's fair to say that Hodgkinson is a supporter of OA: he was an editor at BioMed Central and is now an associate editor at PLOS One. Bohannon got so much mileage from his earlier sting that he did it again, but somewhat more subtly:
This time, with two German documentary makers, he'd had an even more audacious plan: to hoodwink lazy journalists—churnalists—by running a real clinical trial, fiddling the stats, and publishing it in an apparently peer-reviewed journal. As 'Johannes Bohannon' of the 'Institute of Diet and Health' in Mainz he submitted his study to 20 journals; of those that took the bait he selected the International Archives of Medicine, who had agreed to publish without peer review, and he then press released it.

The post discusses what happened after Bohannon publicized his second sting. One focus—not unreasonably—was whether Bohannon's stings are themselves unethical. (The link is to a long post by Hilda Bastian, another OA editor, and it's worth reading; in my opinion, she makes a strong case that this sting was unethical. There's also the issue that revealing the study he paid for as being a sting muddies the water, since there are any number of legitimate studies suggesting that dark chocolate does have health benefits.)

Back to the story itself. The publisher of the journal, iMed.pub, tried some rather desperate damage control; iMed.pub told me on Twitter “That article has never been published” and said in a disclaimer on their website that the paper “accidentally appeared online for some days. Indeed that manuscript was finally rejected and never published as such.” They accused Bohannon of lying: “what the pseudo-author was claiming is false”. The article was available until 27 May; is two months “some days”? Nobody was convinced and the blog Retraction Watch took this claim to task. Few asked: “Who are the International Archives of Medicine and iMed Publishing, anyway?”, but I was intrigued as I had heard of iMed.pub and their journals before — what was going on?

Oh look: International Archives of Medicine is another journal that used to be published by BioMed Central—but in this case, it was sold to iMed.pub.
iMed Publishing (Internet Medical Publishing or iMed.pub) would like to be known as "the fastest growing publishing house on the net", an open access publisher based in London, UK, publishing seven journals and affiliated to the Fundación de Neurociencias and the Internet Medical Society. iMedPub Limited is a company registered in the United Kingdom (company number 08776635, though 2,215 companies also use the same address as iMed.pub, likely a mail forwarding service). iMed.pub and their associated organizations seem to trace back to two Spanish clinical researchers: CEO Carlos Vázquez and the director of iMed.pub and the Editor-in-Chief of IAM, Manuel Menendez-Gonzalez.

There's quite a bit more about the company (or companies) and its problems in the post. Among other things, the International Archives of Medicine published 48 articles by Luiz Carlos de Abreu in the first eight months of 2015, the publisher seems to copy its stuff wholesale from other OA publishers…and lots more. It's a long post, but do read it all, including the section "iMed Publishing were not just Bohannon's dupes." Scammy? Probably. In DOAJ? No. Oh…and notice who's going to some lengths to publicize the scam: an OA editor.

For open access. Against deception.
This piece by Ravi Murugesan appeared on September 23, 2014 at Medium, which seems to be a sort of multiauthor blog/platform. He starts by linking to a Beall post, and it's pretty clear that Murugesan's a big believer in Beall's blacklists. He also says he's met researchers in developing nations who have published papers in blacklisted journals: "I'm talking about really bad, obviously bad journals, not those in a grey area. I've worked in scientific communications for 10 years and I think I know how to spot a pretty bad journal." Without getting into the curious lack of evidence to back Murugesan's assertion that he's for open access, I'm including the post here because he offers four pretty good reasons why "unsuspecting or well-meaning authors" may end up in scammy journals:
1. They didn't have enough funds to do thorough research. (No surprise, this is a common problem in developing countries.)
2. They didn't design their research project well enough. (They didn't have anyone to guide or support them in research methodology.)
3. They didn't do a thorough literature review. (They didn't have access to the journals they need to read or they might lack searching skills.)
4. They don't know how to write up and present research results. (English may be a second or foreign language and/or they've not learned how to write a research paper.)

All good reasons why a scholar might end up in a scammy journal—although there are lots more. Oddly enough, given Medium’s presumed focus on conversation, there have been no comments in more than a year.

For Sale: "Your Name Here" in a Prestigious Science Journal
Charles Seife on December 17, 2014 at Scientific American—and although it's almost certainly about a scam, it's not really about open access. (Some of the journals mentioned are OA and in DOAJ; at least one is a subscription journal. The article does not suggest that any of these are "predatory" journals.) But let's include it anyway… The story: there seemed to be "suspicious repetitions of phrases and other irregularities" in six of the 16 articles in one particular issue of one journal, a journal with a solid Impact Factor and pretty careful editorial oversight. This journal isn't alone:
In the past few years similar signs of foul play in the peer-reviewed literature have cropped up across the scientific publishing world—including those owned by publishing powerhouses Wiley, Public Library of Science, Taylor & Francis and Nature Publishing Group (which publishes Scientific American). The apparent fraud is taking place as the world of scientific publishing—and research—is undergoing rapid change. Scientists, for whom published articles are the route to promotion or tenure or support via grants, are competing harder than ever before to get their articles into peer-reviewed journals. Scientific journals are proliferating on the Web but, even so,
supply is still unable to keep up with the ever-increasing demand for respectable scientific outlets. The worry is that this pressure can lead to cheating. The dubious papers aren’t easy to spot. Taken individually each research article seems legitimate. But in an investigation by Scientific American that analyzed the language used in more than 100 scientific articles we found evidence of some worrisome patterns—signs of what appears to be an attempt to game the peer-review system on an industrial scale.

The article offers examples of conclusion paragraphs in two different papers that sound perfectly normal (albeit with an awkward phrase) taken singly—but turn out to be nearly identical, other than specific nouns. In all, the investigation found more than a dozen articles with almost identical language—"like an esoteric version of Mad Libs." Then there's "Beggers funnel plot"—a phrase that turns up dozens of times in journal articles, all by Chinese researchers, and it's a phrase that doesn't make sense (it's a hybrid of Begg and Eggers, apparently). What's going on? Well, for one thing, there's a Chinese company that offers to write scholarly papers (meta-analyses) for $15,000 or so. In other words, a paper mill. As scams go, this one—which, to repeat, has nothing to do with OA as such, since the results show up in journals with various models—may be dangerous, as it threatens the normal assumption that articles are submitted in good faith.
That is the essential threat. Now that a number of companies have figured out how to make money off of scientific misconduct, that presumption of honesty is in danger of becoming an anachronism. "The whole system of peer review works on the basis of trust," Pattinson says. "Once that is damaged, it is very difficult for the peer review system to deal with." "We've got a problem here," Filion says. He believes that the deluge is just beginning. "There is so much pressure and so much money at stake that we're going to see all sorts of excesses in the future."
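Scientific American doesn't spell out its method, but the pattern it describes, conclusion paragraphs identical except for the swapped-in nouns, is exactly what a crude text-similarity pass can surface. Here is a minimal sketch using word-shingle (n-gram) overlap; the shingle size and threshold are arbitrary choices of mine, not anything taken from the investigation.

```python
def ngrams(text, n=5):
    """Split text into overlapping word n-grams (shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets: 1.0 means identical wording."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_near_duplicates(passages, threshold=0.5):
    """Compare every pair of passages and report suspiciously similar
    ones. Mad Libs-style templates defeat exact-duplicate checks, but
    the long runs of shared words between the swapped nouns keep the
    shingle overlap high."""
    shingles = [ngrams(p) for p in passages]
    flags = []
    for i in range(len(passages)):
        for j in range(i + 1, len(passages)):
            score = jaccard(shingles[i], shingles[j])
            if score >= threshold:
                flags.append((i, j, round(score, 2)))
    return flags
```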

Science and the AAAS sell their souls to promote pseudoscience in medicine
This story, by Orac on January 5, 2015 at Respectful Insolence, is another case where what might be considered a scam has nothing to do with OA journals, "predatory" or not. Instead, it's about Science (and, earlier, Nature), and whether or not it's a scam depends on how you feel about "alternative" medical systems. I know Beall loves to question publishers for having journals devoted to Ayurvedic medicine and other traditional medical systems—although he's apparently not bothered by Homeopathy, an Elsevier journal, even
though the basis for homeopathy being anything other than well-constructed placebos is less convincing than the evidence for some other alternative medical systems. But, you know, Homeopathy is a subscription journal from Elsevier, so it must be OK in Beallworld. (Actually, it's hybrid, as are most Elsevier journals these days: an author willing to fork over $2,500 can make their articles about the memory of water OA.) Digressions aside, this long piece (there's a followup post as well) is about an advertising supplement in Science, the first of a three-part series on "The Art and Science of Traditional Medicine." I have to step aside and note that this is about an advertising supplement—being assailed on a blog where, when I first reached it, I was immediately treated to that annoying ad about weird old tricks to reduce stomach fat. But, you know, that's just an ad, so it doesn't reflect badly on…oh wait. The supplement has a bunch of articles. Orac comments at length (and heatedly) on the articles, here and in the followup. It turns out that I can't quite get away with saying "it's just an advertising supplement," as one of the articles is by the CEO of AAAS, publisher of Science, and there's this paragraph in the table of contents in the supplement:
The content contained in this special, sponsored section was commissioned, edited, and published by the Science/AAAS Custom Publishing Office. It was not peer-reviewed or assessed by the Editorial staff of the journal Science; however, all manuscripts have been critically evaluated by an international editorial team consisting of experts in traditional medicine research selected by the project editor. The intent of this section is to provide a means for authors from institutions around the world to showcase their state-of-the-art traditional medicine research through review/perspective-type articles that highlight recent progress in this burgeoning area. The editorial team and authors take full responsibility for the accuracy of the scientific content and the facts stated.

Whoops. If the content is scammy, Science is up to its eyeballs in the scam. The post ends:
Sadly, I can't help but conclude, Science, like Nature, has sold its soul. Nature, at least, seems to have learned from its mistake. At least it hasn't done it again in three years. It remains to be seen how low Science will go. After having skimmed the articles that require further discussion, I shudder to go deeper, and I await with trepidation the next two segments in this ad-fest. For shame, Science. There is no excuse.

That's the post (as noted above, there's a second part posted the next day). But I noticed that, when I got to that point, the scroll bar showed that I was much less than a tenth of the way through. That's because there are 635 comments. A lot of them involve one commenter's insistence that marketing of vitamin supplements is a More Important Issue than touting traditional medicines, and therefore…well, therefore nothing, really, unless you believe that This Important Topic should rule out all discussion of anything else. (There's also a true believer in "natural" remedies from Tennessee, but that's another set of oddities.) I made it through 150 comments before giving up…the stream seems endless. Perhaps the best brief comment also applies nicely to that Elsevier journal: "Science journals are shilling for the placebo effect." Except that the placebo effect doesn't wipe out rhinoceri and tigers. And so ends this odd set of real and possible scams, some very much connected to subscription journals, some related to OA, some across the board.

Signs!
OK, so the exclamation point's a little silly this time around—heading up a group of items relating to signs of ethical problems. I should also point to Chapter 7 of Open-Access Journals: Idealism and Opportunism, but that won't be openly accessible until next summer. I notice that both a certain Loon and a certain Kitchen are represented more than once in this subtopic. So it goes. I don't always agree with the loon…and I don't always disagree with the kitchen.

Unethical journals tell academics to pad their papers with citations
This piece by Justin Norrie, posted February 5, 2012 at The Conversation, is probably the oldest in this roundup: I simply hadn't encountered it previously. It's a tricky sign because it's more likely to turn up in the review process, after an author already has a partial commitment to a journal. The sign is right in the headline—and it's apparently pretty widespread:
One in five academics from a range of fields say they have come under pressure from journals to pad out their papers with unnecessary citations to get published, a large survey has found. Analysis of 6,672 responses from US academics working in the areas of economics, sociology, psychology and business showed that 40% knew about the practice of "coercive citation". Half of those had been on the receiving end, according to researchers from The
University of Alabama in Huntsville’s College of Business Administration who carried out the survey.

This sign has nothing to do with OA and a lot to do with impact factors, at least if the coercion is to add citations to articles in the same journal, as seems likely. I have seen questionable OA journals that actually say in their author instructions that some percentage of citations or number of citations should be to other articles in the same journal. I say questionable without scare quotes because, if I see that, I immediately flag the journal as C-level, “highly questionable and probably best avoided.” There aren’t a lot of them—I’d say fewer than a dozen and maybe fewer than six—but there are some.
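That particular red flag is also easy to quantify from the outside. Here is a minimal sketch, assuming each reference in a manuscript has already been reduced to the title of the journal it cites; the journal names are invented for illustration.

```python
def self_journal_share(reference_journals, journal_title):
    """Fraction of a manuscript's references that cite the very
    journal it was submitted to. Consistently high shares across a
    journal's output are consistent with coerced or instructed
    citation rather than ordinary scholarship."""
    if not reference_journals:
        return 0.0
    target = journal_title.strip().lower()
    hits = sum(1 for j in reference_journals
               if j.strip().lower() == target)
    return hits / len(reference_journals)

# Made-up illustration: 6 of 20 references point back at the same
# journal, a 30% share.
refs = ["Journal of Suspect Studies"] * 6 + ["Other Journal"] * 14
print(self_journal_share(refs, "Journal of Suspect Studies"))  # 0.3
```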

Common scam-journal red flags: archiving policies, indexing policies, and article count
Here's the Library Loon, on July 29, 2014 at Gavia Libraria. She highlights three phenomena "that all by themselves often suffice to mark a journal that no one should bother with." Read the post for full descriptions, but in brief:
• Ludicrous claims about archiving—such as saying that "depositing" with Google Scholar constitutes archiving.
• Ludicrous claims about indexing—such as statements that suggest a journal is indexed in certain places while not actually saying so (or "pure snow jobs" such as claiming indexing in Ulrich's, which is a directory, not an index).
• Article count—this one's a little tricky, as there are some specialized journals that just aren't going to have many articles. Still, for a journal in any reasonably broad field, her example (seven articles in two years) is pretty lackluster.
I'll give her the first two. The third one? Maybe.

Adventures With Predatory Publishing: A Tale of Two Journals
Peter Burns is the publisher at Allen Press and posted this sometime in 2014 (I tagged it on November 29, but it could be older) in FrontMatter. Allen Press publishes half a dozen society-owned journals.
One morning, I received an e-mail from Dr. Jack Yu, editor of The Cleft Palate–Craniofacial Journal. He was forwarding a message from Alice Wills of the American Society of Science and Engineering. "The purpose of this e-mail is to inquiry about the possibility of cooperation with your journal," Alice wrote. "In the mutual-benefit cooperative relationship, we can do publicity, promotion and collect papers for your journal, and we can guarantee the quantity and quality of the papers we provide. Moreover, we will also pay the publication fee if any."

Huh? What exactly was Alice saying? Was she offering to publish the journal? Well, the journal already has a publisher—Allen Press. While we promote the journal, we don’t provide papers to the editorial office. Moreover, anyone who guarantees the quantity or quality of manuscripts is claiming some remarkable powers. Is there a manuscript broker out there hustling to meet a quota? And isn’t peer review the quality-control process for journals? Who can guarantee the results of peer review? I also had to wonder what’s in this for Alice—presumably she wants something in return, right?

Yes, the "society" was on Beall's list; yes, Burns takes the list at face value as a list of predatory publishers (without even the qualifiers Beall now uses). Burns told Yu to ignore it, and that was the end of the matter—but later that day encountered something else. The "Photon Foundation" was sending spam to members of the American Society of Parasitologists inviting submissions for Journal of Parasitology. One of the scammy emails was forwarded to Burns from Dr. Michael Sukhdeo, editor of The Journal of Parasitology—which is copublished by the society and Allen Press.
The e-mail included a link to "Journal of Parasitology" (with "The" omitted from the title) on Photon's website. One of the first things I noticed was the journal's "impact index." Elsewhere on Photon's site, I found an explanation: "Impact Index is new generation impacting system…Impact Index ensures multi-fold peer reviewing and meritorious research articles…Impact Index provides strong weightage to quality of research articles published in a journal." The journal's aims and scope are brief: "Journal of Parasitology accepts manuscripts dealing with advancement of Parasitology to serve the domain better." Submission instructions are even more concise: "Attach your word file with e-mail and send it to [email address]." While the Photon site doesn't use the term "open access," it does say that articles will be published online "as absolutely free access to readers." It provides "useful links" to resources such as PubMed, The International Plant Name Index, and the United States Patent and Trademark Office. Readers who scroll down will see solicitations for papers, books, and submissions to various awards programs before finally, about three-quarters of the way down the page, some articles appear. Only two PDF articles were available when I first checked the site; a recent visit to the page showed eight articles, although only five would download. The other three links go to other pages on the site.

Technically, there's neither copyright nor law violation here: The Journal of Parasitology is not the same as Journal of Parasitology, and in any case you can't copyright a title. But it is surely both unethical and a sign that the publisher's scammy. It's also cheap: the journal site is a Google Site—the "foundation" didn't even spend the money to establish a domain and website. When Burns complained to Google that it should shut down the phony journal, Google declined (appropriately, I suspect: no law was being broken). These folks prey on researchers in several ways:
• By creating new open-access journals, but conducting little or no peer review. These "journals," then, are nothing of the sort as there is no quality control. They simply publish whatever manuscripts they receive from authors, after also collecting their fees. Publication fees are often not publicized, so authors may not be aware of the charges until they receive an invoice.
• By impersonating a legitimate journal with a fake website and solicitous e-mails.
• By compounding the fraud with false journal metrics and stats, such as an "impact index" or an "ISJN," as opposed to the real-life impact factor or ISSN.
• By making up names of editors and editorial board members, or using names of actual people without their permission.

This does seem to be a pretty clear case of unethical "publishing," but the laundry list here may or may not apply (and, of course, an ISSN says nothing about the validity or quality of a journal). There's a lot more to the post, but it gets tricky. For one thing, Burns relies heavily on Beall. For another, he says flatly that "there is no 'white list' to serve the opposite purpose of Beall's 'black' list," and while he has a paragraph on DOAJ, he focuses on the Bohannon sting and the fact that some (few) DOAJ journals are also on Beall's lists. Burns closes:
Much has been made lately of the issue of trust in scholarly publishing. With the growth in predatory publishers, the best advice is to follow the age-old dictum: trust, but verify.
I wouldn't argue with that—and would suggest that it applies equally well to the notion that everything on Beall's lists is "predatory." As you might expect, "Photon Foundation"'s journals don't appear in DOAJ—and the sites (or, rather, pages within one Google Sites site) are ghastly; I can't believe any author or reader would mistake these for journals of any sort. The signs, in this case? First, unexpected email inviting an article. Second, suggesting that the author just attach an article to a reply email without further investigation. Third…well, if you go to the Photon Foundation site or any of the journal homesites, can you believe that this is a real publisher or that these are real journals? In what universe?

Should We Retire the Term "Predatory Publishing"?
Yes. Next question? Oh, right: Rick Anderson posted this on May 11, 2015 at the scholarly kitchen. He agrees. (And, in his discussion of Beall's lists and the like, he gets the key number on OA and charges right: "the majority of OA articles published each year are funded by author-side charges." [Emphasis added.] That's a true statement, because APC-charging OA journals tend to publish more articles than the larger number of no-fee OA journals.)
This question has become relevant because of that common refrain heard among Beall's critics: that he only examines one kind of predation—the kind that naturally crops up in the context of author-pays OA. What about toll-access publishers that jump on the OA bandwagon "just… for the fees," or who publish fake journals themselves? What about publishers who simply do an unconscionably poor job of fulfilling their obligations to authors, or who unethically leverage their monopoly power to maximize revenue at the expense of libraries—a practice some characterize as "predatory pricing"? And what about the authors who intentionally use the services of fraudulent publishers in order to deceive their colleagues or employers, or who engage in dishonest manipulation of the peer-review process? Aren't they "predators" as well?

It may be true that these behaviors deserve exposure and shaming just as much as the behaviors of those publishers branded "predatory" by Beall's List do. The problem is, all of these behaviors are different enough from each other that I'm not sure lumping them all together under the epithet "predatory" is useful. (On whom is a lazy peer reviewer "preying"?)

Here we have some signs of questionable publishing practices—signs that are present in some journals using any business model you might care to name. Here’s a nice list from a bit later in the post, discussing “scholarly bad faith”:

1. Attempting to deceive authors into paying for nonexistent or shoddy editorial services
2. Selling authors fake or meaningless credentials in order to help them deceive their peers
3. Deceiving one's peers by purchasing fake or meaningless publishing credentials
4. Publishing journals or books (whether on an OA or a toll-access basis) that are presented to the marketplace as rigorous and scholarly, but consist in fact of whatever nonsense or garbage authors may wish to submit
5. Leveraging monopoly power excessively to exact maximum revenues from academic customers
6. Taking advantage of one's role as, say, the certifier of academic programs to require that those programs have access to one's commercial products
7. Stacking a "big deal" package with weak or sub-par journals in order to inflate those journals' usage data and/or justify otherwise indefensible price increases

Now there’s a set of signs. There’s more to the post and it’s generally good. You may also want to read the 89 comments— although it’s interesting that the first, by Kent Anderson, immediately shifts the focus back to OA and only OA. And, of course, KA doesn’t accept that #5 is bad faith at all. (Neither do David Crotty and some others: after all, the publisher’s just “maximizing shareholder value,” which seems to have become the Holy Grail and sole ethical concern of too many commercial firms.) You may find the comment stream unusually revealing about some of the commenters involved. There’s way too much praise for Beall’s list for my taste, but also some negative commentary about it; unfortunately, in at least one case RA essentially responds “then go do your own blacklist.” It’s fair to assume that this post and the comments that resulted lead up to the next item:

Deceptive Publishing: Why We Need a Blacklist, and Some Suggestions on How to Do It Right
Rick Anderson posted this on August 17, 2015 at the scholarly kitchen. Rick Anderson usually writes well and thinks well, which doesn't mean I always agree with him—and I certainly didn't this time. I agree that "deceptive publishing" is a good replacement for "predatory publishing"—although even then I wonder whether most authors who publish in "deceptive" journals are being deceived by them. RA (I'll use his initials to distinguish him from another "chef" with the same last name) breaks "deceptive" publishing down into four segments—and it's worth noting right off the bat that RA is not limiting his discussion to OA journals:
• Phony journals such as the "Australasian" series from Elsevier (RA's example). He thinks these are relatively rare; he's probably right.
• Pseudo-scholarly journals "that falsely claim to offer authors real and meaningful editorial services (usually including peer review) and/or credible impact credentialing (usually in the form of an Impact Factor), and thereby also falsely claim to offer readers rigorously vetted scientific or scholarly content. In this case, the content may or may not be legitimate scholarship—but the journal itself is only pretending to provide the traditional services of peer review and editorial oversight." He says this is "perhaps the largest category of deceptive publisher and also one of the more controversial ones, since the line between dishonesty and simple ineptitude or organic mediocrity can be fuzzy." Sometimes, it's clear enough; sometimes, not so much.
• False-flag journals that deliberately set out to trick authors into believing they're submitting to some existing legitimate journal.
• Masqueraders—journals with titles that "imply affiliation with a legitimate- and prestigious-sounding scholarly or scientific organization that does not actually exist."
So far, so good: that's a decent typology, although I might add "journals" and "publishers": that is, journal titles and publisher names that don't actually represent any serious publishing activity at all. I believe that group may be larger than pseudo-scholarly journals—possibly considerably larger. The next section, "Why a Blacklist?," starts to get trickier. I certainly agree that it makes sense to focus on journals rather than publishers (there have been deceptive journals from some of the most respected publishers), and I think this is an excellent question-and-answer combination:
What's the difference between either an inept publisher or a legitimate publisher of low-quality content, and a truly predatory or deceptive publisher? The difference lies in the intent to deceive. Deceivers are doing more than just running their journals badly or failing to attract high-quality content. In some cases they are lying about who they are and what they're doing; in others, they are promising to do or provide something in return for payment, and then, once payment is received, not doing or providing what they promised. Some focus their deceptive practices on authors, some on readers, and some on both.

Then we get RA's specific answer to the question of why we need blacklists:
Whitelists are good and important, but they serve a very different purpose. For example, a publisher's absence from a whitelist doesn't necessarily signal to us that the publisher should be avoided. It may be that the publisher is completely legitimate, but has not yet come to the attention of the whitelist's owners, or that it is basically honest but doesn't quite rise to whatever threshold of quality or integrity the whitelist has set for
inclusion, or that it is still in process of being considered for inclusion. The function of a blacklist is also important, but it’s very different. Among other things, it acts as a check on the whitelist—consider, for example, the fact that just over 900 questionable journals were included in the Directory of Open Access Journals (currently the most reputable and well-known OA whitelist on the scene) until its recent housecleaning and criteria-tightening. If Walt Crawford hadn’t done the hard work of identifying those titles (work that, let’s remember, was made possible by Beall’s list), at what point, if ever, would they have been discovered and subjected to critical examination?

In the first place DOAJ’s criteria-tightening was not in any way occasioned by my investigation: it was and is a wholly separate effort, one that began a long time before I started my investigation—and resulted in new criteria and a new application form that appeared before I’d completed or published my investigation. DOAJ’s good, but I find it hard to believe that my July 2014 publication (appearing in June) could have influenced their activities in December 2012 through April 2014. In the second place, I did not find over 900 questionable journals in DOAJ—I found over 900 journals that appeared both in DOAJ and on Beall’s blacklists. Since I do not agree that everything on Beall’s blacklists is questionable, I can’t agree with that characterization. Also, to be sure, I did not identify the titles: that wasn’t my purpose. My work was an examination of Beall’s list—only in that sense was it “made possible” by the list. (Added October 28, 2015: Now that MDPI has been removed from the blacklist, the “900 or so” is down to “775 or so,” and I suspect it should be a lot lower than that.) If you agree with RA that blacklists are a good thing (which I don’t, and which I find philosophically difficult from a librarian’s perspective, but of course I’m not a librarian), his set of criteria for a good blacklist seem reasonable. So do his suggested roadblocks. Why is this in the SIGNS! group? Because his typology is good and it didn’t seem to fit anywhere else. I think the post is worth reading—and that you should also read the 114(!) comments. The very first one, from Phill Jones, notes (among other things): What worries me about blacklists is that I’m not aware of any other industries that have managed their quality control issues in this way. I could be wrong as I’ve only asked a couple of people in different industries but sectors like engineering and medicine rely heavily on certification and badges.

RA does offer one example, blacklists of diploma and accreditation mills. But isn't a whitelist of legitimate
accreditation agencies more useful? As with journals too new to be in DOAJ, if something’s not in the whitelist, you know to ask probing questions. I’m involved in the comment stream, raising some of the same issues I raise here. It should be clear that most of my interchanges with RA are cordial, not combative, on both sides. (My interchanges with some of the other “chefs” may be less cordial. Yes, I still resent David Crotty’s term “cherrypicking.”)
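Incidentally, the "900 or so" overlap discussed above is nothing more exotic than a set intersection over normalized journal titles. A minimal sketch; the normalization here is deliberately crude, and real matching would also want to compare ISSNs.

```python
def normalize(title):
    """Crude title normalization: lowercase, collapse whitespace."""
    return " ".join(title.lower().split())

def overlap(whitelist_titles, blacklist_titles):
    """Journals appearing on both a whitelist and a blacklist --
    the kind of cross-check described above."""
    return ({normalize(t) for t in whitelist_titles}
            & {normalize(t) for t in blacklist_titles})
```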

Think. Check. Submit. (How to Have Trust in Your Publisher.)
This post by Charlie Rapple on October 1, 2015 at the scholarly kitchen serves as a useful introduction to the TCS campaign (Think. Check. Submit.) It links to a site—but the site was still, on October 15, 2015, an ad/parking page. I could say "this is not the best way to inspire confidence in a new initiative." (NOTE: As I edit this, on October 26, 2015, the Think. Check. Submit. website is now populated and seems useful—but it should not have been publicized until it existed.) According to the post, TCS is "co-ordinated by ALPSP, DOAJ, INASP, ISSN, LIBER, OASPA, STM, UKSG and individual publishers…" (I must admit that I never thought of ISSN as something other than a numbering scheme, but apparently there is an international centre.) My original notes here had to do with the desirability of avoiding a page full of ads where a website should be: you don't publicize the site until it's there. Now that it is, I'll modify the notes: The purpose of TCS is to provide a set of signs for whitelisting: "to help researchers learn who they can trust when they are seeking to publish their work." There's "a checklist of what to look for" including:
[C]an you tell who publishes the journal? Have you read any of its papers? Do your colleagues recognize it? How do the existing articles fit with your work?

Oddly, none of the commenters chooses to mention that there was no website for at least the first two weeks of October; maybe nobody clicked through? I think Think. Check. Submit. could be at least mildly useful, even though it got off to a bad start.

They lost the Loon at "think."
This October 7, 2015 post by Library Loon at Gavia Libraria is a little snarky, but I'm not sure it's entirely wrong, much as I'd like to believe that.
Unfortunately—whether [TCS] is a design-by-committee compromise or someone got the bit in their teeth; there could be many reasons for this and none of them is malice—its execution leaves a great deal to be desired.
Just the tagline is unappealing enough; “think” is emphatically not what a harried academic looking for a likely journal wants to do, and “check” is even worse! (“Submit” has somewhat less-than-savory overtones, but the Loon supposes this was not easily avoidable, as for good or ill that is the standard verb.) What harried academics (who are not cynically using scam journals to feed wrongheaded evaluation mechanisms) want is to avoid being snookered. Moreover, they want that to happen with the least possible thought or effort on their part. If academics felt wholly confident in their ability to avoid scam journals, they would not incessantly demand that such journals be shut down, to the point of pitching a giant moral panic about the phenomenon. If they did not mind thinking a little harder about their journal choices, a certain list would not have nearly the notoriety it does.

Based on the rest of the post, where the Loon calls the checklist “immensely too onerous, too nitpicky, too sterile, and too hard to remember for the standard-issue harried academic to do anything but click immediately away from,” there must have been something substantive on the TCS site at some point between October 1 and October 7; it’s odd that it would have been removed in favor of a parking page with ad links. Anyway, the rest of the post is worth reading— and supports my own feeling (as reflected on page 33 of Open-Access Journals: Idealism and Opportunism): what you need is a series of questions that results in opting out as soon as you get a bad answer. My set has eleven questions (with three more to consider at some point), but in most cases I suspect the first two or three are enough—and, of course, in the near future “Is the title in DOAJ?” will be Question Zero—if the answer isn’t Yes, then there has to be an awfully good reason to continue.
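That opt-out-early idea maps directly onto a short-circuiting checklist: ask the cheapest questions first and stop at the first bad answer. Here is a minimal sketch; the questions are illustrative stand-ins rather than my actual eleven, and the journal dict is a hypothetical bundle of facts gathered about a title.

```python
def screen_journal(journal):
    """Run cheap screening questions in order; opt out at the first
    bad answer rather than scoring every criterion."""
    checks = [
        # Question Zero: absence from DOAJ demands a very good reason.
        ("Is the journal in DOAJ?",
         lambda j: j.get("in_doaj", False)),
        ("Does it name a verifiable editor and editorial board?",
         lambda j: j.get("named_editorial_board", False)),
        ("Are any author-side fees disclosed before submission?",
         lambda j: j.get("fees_disclosed", False)),
    ]
    for question, passed in checks:
        if not passed(journal):
            return f"Opt out: failed {question!r}"
    return "No red flags from these checks; keep digging if unsure."

print(screen_journal({"in_doaj": False}))  # bails at Question Zero
```

The design point is the early exit: a harried academic never has to work through a long, nitpicky checklist when the first or second answer already settles the matter.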

Beyond Beall's List
This March 2015 College & Research Libraries News article by Monica Berger and Jill Cirasella has "Better understanding predatory publishers" as a subhead. Berger and Cirasella give the usual credibility to and praise for Beall's list and use predatory without scare quotes, but there's sensible stuff here. For example:
Of course, low-quality publishing is not new. There have long been opportunistic publishers (e.g., vanity presses and sellers of public domain content) and deceptive publishing practices (e.g., yellow journalism and advertisements formatted to look like articles). It is also not unique to OA journals. There are many mediocre subscription-based journals, and even respected subscription-based journals have accepted
deeply problematic submissions (e.g., Andrew Wakefield et al.’s article linking autism to vaccines in The Lancet and Alan Sokal’s nonsense article in Social Text).

Still, they speak of an “explosion” of predatory publishing—and I’m not entirely satisfied that there is such an explosion, if by “publishing” one means actually producing journals containing articles. Calling Beall’s attitude toward OA “complicated, and not entirely supportive” is a little like calling a cast-iron skillet not entirely white. Citing some of my notes on Beall, the authors seem to treat this as “politics,” which is odd—but they do take Beall to task, at least slightly. It’s a useful piece, including some of the items required for DOAJ inclusion and some of the signs supposedly used by Beall in his lists. No comments, as it’s part of a periodical (one that, as with C&RL, ACRL’s peer-reviewed journal, is gold OA with no APC).

Legacy Publishers!
A bunch of items relating to ethical issues with legacy publishers—that is, publishers that predominantly publish subscription journals. It's somewhat of a hodgepodge (as opposed to the crystal-clear organization of other subtopics).

Journals without editors: What is going on?
Deevy Bishop posted this on February 1, 2015 at BishopBlog. It relates to an Elsevier journal, Research in Autism Spectrum Disorders, RASD, and certain practices of the editor of that journal, Johnny Matson—and Bishop's presence on the editorial board. Bishop wasn't aware she was on the RASD editorial board, but admits that she might not have remembered accepting an invitation.
I suspected that I'd agreed to serve on the Editorial Board in the course of [an] email exchange, but I had no details of this on file. Journals vary considerably in how far they treat their Editorial Board as emblematic, and how far they actually make use of the board in decision-making. I had barely had any interaction with Research in Autism Spectrum Disorders over the years, and had remained sufficiently unaware of my Editorial Board role to omit this from my curriculum vitae. But there I was, as Michelle had noted, listed as a member of the Editorial Board on the journal website.

She had to take action because she signed on to the Cost of Knowledge protest and resigned from the editorial boards of Elsevier journals as part of that protest. But she also found Matson's history troubling: he's an author on more than 10% of RASD articles since 2007. And he has an astonishingly high H-index—with more than half of his citations being self-citations.
(There's a revealing table showing percentage of self-citations for top scientists in autism/intellectual disabilities: nobody else even hits 10%.)
I wrote to Matson to query my editorial board status and to explain why I wished to resign and received a curt reply, confirming that I had agreed to be on the Editorial Board, but he would remove me. Nevertheless, my name remained on the Editorial Board list of the journal website for a while. But then, a few weeks ago, there was a new development. For both RIDD and RASD, the pages on the journal website showing information about the Editors and Editorial Board disappeared. What, I wonder, is going on?

There are some updates, including useful additional commentary and this: P.S. I am hearing on the grapevine that people have had papers accepted in RASD and RIDD without reviewing, but nobody seems willing to say that publicly. If there are any brave souls out there who are prepared to speak out, please can you do so via Comments. Thanks.

Unquestionably, a number of articles (by another prolific author) were accepted rapidly: seven accepted within a day of submission, seven more within a week. As of this writing, not surprisingly, RASD shows an entirely different editor-in-chief and five associate editors. The other journal shows a different editor and a larger editorial board. The RASD and RIDD story continues in “The games we play: A troubling dark side in academic publishing,” by Pete Etchells and Chris Chambers on March 12, 2015 at The Guardian. (Based on this article, I’m assuming “Deevy” is a nickname for Dorothy, but I could be conflating two Bishops.) It adds a lot more detail regarding Matson’s papers and other issues. For example, the median time between submission and acceptance for 32 papers with Matson as a co-author, in a different journal, was one day—and for 73 RASD/RIDD papers coauthored by three other researchers, 17 were accepted on the day they were received and 43 in all were accepted within two days. Oh, and when contacted, the coauthors said this was standard practice: The figures you state for 73 papers is routine practice for papers published in RIDD and RASD. A large percentage of all papers published in any given issue of RIDD and RASD appear to have received a rapid rate of review as indicated would happen in the official editorial policy of these journals.

There's considerably more here—including some unfortunate indications that some folks choose to attack the messenger.
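The turnaround-time figures Etchells and Chambers report are straightforward to reproduce for any journal that prints received and accepted dates on its articles. A minimal sketch, with dates invented for illustration:

```python
from datetime import date
from statistics import median

def turnaround_report(papers):
    """papers: (submitted, accepted) date pairs, as printed on the
    articles themselves. Clusters of same-day acceptances are the
    tell for absent or perfunctory review."""
    days = [(accepted - submitted).days for submitted, accepted in papers]
    return {
        "median_days": median(days),
        "same_day": sum(1 for d in days if d == 0),
        "within_two_days": sum(1 for d in days if d <= 2),
    }

# Invented data: two same-day acceptances out of three papers.
print(turnaround_report([
    (date(2011, 3, 1), date(2011, 3, 1)),
    (date(2011, 4, 2), date(2011, 4, 2)),
    (date(2011, 5, 5), date(2011, 5, 9)),
]))
```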

Bishop herself (based on the comments, this is Dorothy Bishop: she uses "Deevy" as a pen name for her humorous crime novels) has a followup on March 21, 2015 at BishopBlog: "Will Elsevier say sorry?" She goes into more detail about what was happening at RIDD and goes further:
It is difficult to believe that nobody at Elsevier was aware of what was going on. In 2011, at the height of the rapid turnaround times, there was a fivefold increase from the 2004 level of submissions. Many journals have grown in size over this period, but this was massive. Furthermore, the publisher was recording the dates of receipt and acceptance with each paper: did nobody actually look at what they were publishing and think that something was odd? This was not a brief hiccup: it went on for years. Either the publisher was slumbering on the job, or they were complicit with the editor.

Are commercial publishers wrongly selling access to openly licensed scholarly articles?
Timothy Vollmer asks that question in a March 13, 2015 post at Creative Commons.
Ross Mounce, a postdoc at the University of Bath, recently wrote about how Elsevier charged him $31.50 for an "open access" research article licensed under a Creative Commons Attribution-NonCommercial-NoDerivs (BY-NC-ND) license. Mounce was understandably upset, because the article was originally published by another publisher – John Wiley – and was made available freely on their website. Elsevier's act of charging for access initially appeared improper because of Wiley's use of a noncommercial license. This situation has sparked a debate among supporters of Open Access about whether or not Elsevier violated the terms of the BY-NC-ND license, and whether articles that are intended to be distributed freely can end up locked behind paywalls. This isn't the first time this has happened; Peter Murray-Rust documented another instance of it last year. This kind of situation can leave researchers questioning why they should invest in ensuring that their research is distributed for free if another publisher can simply turn around and sell it – especially if the article carries a Creative Commons license that is supposed to restrict commercial use.
Mounce complained to Elsevier about the arrangement, and as of March 9, they've removed the paywall from the article and promised Mounce a refund. A representative from Elsevier claimed "there was some missing metadata for some of the OA articles," thus apparently allowing for users to be charged for access to those openly licensed articles. Elsevier said it will investigate and reimburse others who purchased access to those articles on the Elsevier site during the time that the paywall was up. At the same
time, Elsevier has hinted that it has the right to sell access to BY-NC-ND articles it holds because of a separate license they get from the author.

Those are the first two paragraphs. I could legitimately quote the whole thing (as you’d expect, it has a CC BY license) but won’t: it’s worth reading on its own. As is usual with Elsevier, once the problem was made clear, it got fixed, sooner or later. Mounce was on a roll, as evidenced by the following (this may be a partial list):

Wiley are charging for access to thousands of articles that should be free
Ross Mounce in a March 26, 2015 post on A blog by Ross Mounce—and this time it's a case of a subscription journal that allows access after a tiny little three-year embargo. Seems Wiley wasn't honoring the window, as demonstrated in a screenshot showing a 1974 article from the journal for which Mounce paid $45.60. An update later that day says Wiley's fixed the problem but that "these articles were wrongly on sale for 2 months and 26 days." In comments, Wiley people dispute that assertion, saying it was a problem that persisted for "a manner of hours." Wiley said it would refund any incorrect charges. (Just for interest, I attempted to download a four-year-old article in the journal. It downloaded without question or incident.)
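The check Mounce effectively performed is simple date arithmetic: anything older than the rolling embargo window should be free. A minimal sketch; the three-year window comes from the post, and everything else is illustrative.

```python
from datetime import date

def should_be_free(published, embargo_years=3, today=None):
    """Rolling-embargo check: is this article past the paywall window?
    (Naive year arithmetic; a Feb 29 publication date would need care.)"""
    today = today or date.today()
    embargo_ends = date(published.year + embargo_years,
                        published.month, published.day)
    return today >= embargo_ends

# A 1974 article checked in March 2015 cleared a three-year embargo
# decades ago -- yet it was on sale for $45.60.
print(should_be_free(date(1974, 1, 1), today=date(2015, 3, 26)))  # True
```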

Springer caught red-handed selling access to an Open Access article
Same author, same blog, April 27, 2015—and this time it's an OA article in a "hybrid" journal. (The article was freely available on the publisher's website but for sale on Springer's site.)
I don't actually care whether this is technically 'legal' any more. That doesn't matter. This is scammy publishing. I want a refund and I will be contacting Springer shortly to ask for this. The author also hopes I get a refund – he wanted his article be open access, not available for a ransom:
@rmounce @noamross Should I expect some royalties from the sale of my Open Access paper? I hope you ask for & receive a refund!—Luis A. Apiolaza (@zentree) April 27, 2015
Frankly, I'm getting tired of writing these blog posts, but it needs to be done to record what happened, because it keeps on happening. I really think we need to setup a PaywallWatch.com c.f. RetractionWatch.com to monitor and report on these types of incidents. It's clear the publishers don't care about this issue themselves – they get extra money from readers by making these 'mistakes' and no financial penalty if anyone does spot these mistakes. Calculated indifference.

Are these known incidences just the tip of the iceberg? How do we know this isn’t happening at a greater scale, unobserved? There are more than 50 million research articles on sale at the moment. Perhaps in small part this explains the obscene profits of the legacy publishers? It’s yet another nail in the coffin for hybrid OA – we simply can’t trust these publishers to keep this content open and paywall-free.

I’m quoting more than usual here because I think Mounce makes a compelling point, especially about “hybrid” OA. (Yes, Mounce’s blog is CC-BY.)

Springer admits wrongdoing in selling open access article

Same author, same blog, May 7, 2015—and in this case, Springer apologized “for not having been more thorough and vigilant.” For more, see the article.

Stepping back from sharing

Kevin Smith (on May 4, 2015 at Scholarly Communications @ Duke) writes about Elsevier’s new policies regarding author rights—and Smith’s not happy:

This is a retreat from open access, and it needs to be called out for what it is.

Briefly: since 2004 Elsevier has allowed immediate deposit of final accepted manuscripts in institutional repositories—but in 2012 it added a clause forbidding such deposits at institutions with OA mandates. That didn’t work, so Elsevier now has a “simplified” methodology:

Two major features of this retreat from openness need to be highlighted. First, it imposes an embargo of at least one year on all self-archiving of final authors’ manuscripts, and those embargoes can be as long as four years. Second, when the time finally does roll around when an author can make her own work available through an institutional repository, Elsevier now dictates how that access is to be controlled, mandating the most restrictive form of Creative Commons license, the CC-BY-NC-ND license, for all green open access.

Oh, and you have to check a 50-page list to see the actual embargo for delayed green for your journal—and many U.S. and European journals have embargoes of 24, 36 or 48 months. (Not for journals in PubMed Central: the maximum legally allowed there is 12 months.)

The rapid growth of open access policies at U.S. institutions and around the world suggests that more and more scholarly authors want to make their work as accessible as possible. Elsevier is pushing hard in the opposite direction, trying to delay and restrict scholarly sharing as much as they can. It seems clear that they are hoping to control the terms of such sharing, in order to both restrict its putative impact on their business model and ultimately to turn it to their profit, if possible. This latter goal may be a bigger threat to open access than the details of embargoes and licenses are. In any case, it is time, I believe, to look again at the boycott of Elsevier that was undertaken by many scholarly authors a few years ago; with this new salvo fired against the values of open scholarship, it is even more impossible to imagine a responsible author deciding to publish with Elsevier.

Comments begin with some ad hominem stuff from Beall and continue with an odd comment from Elsevier’s general counsel. Mike Taylor is also involved. Others also commented on the new Elsevier policy. A May 20, 2015 press release from SPARC is entitled “New Policy from Elsevier Impedes Open Access and Sharing,” incorporating a statement from 23 groups opposing the new Elsevier policy:

On April 30, 2015, Elsevier announced a new sharing and hosting policy for Elsevier journal articles. This policy represents a significant obstacle to the dissemination and use of research knowledge, and creates unnecessary barriers for Elsevier published authors in complying with funders’ open access policies. In addition, the policy has been adopted without any evidence that immediate sharing of articles has a negative impact on publishers’ subscriptions. Despite the claim by Elsevier that the policy advances sharing, it actually does the opposite. The policy imposes unacceptably long embargo periods of up to 48 months for some journals. It also requires authors to apply a “non-commercial and no derivative works” license for each article deposited into a repository, greatly inhibiting the re-use value of these articles. Any delay in the open availability of research articles curtails scientific progress and places unnecessary constraints on delivering the benefits of research back to the public. Furthermore, the policy applies to “all articles previously published and those published in the future,” making it even more punitive for both authors and institutions. This may also lead to articles that are currently available being suddenly embargoed and inaccessible to readers.

As organizations committed to the principle that access to information advances discovery, accelerates innovation and improves education, we support the adoption of policies and practices that enable the immediate, barrier free access to and reuse of scholarly articles. This policy is in direct conflict with the global trend towards open access and serves only to dilute the benefits of openly sharing research results. We strongly urge Elsevier to reconsider this policy and we encourage other organizations and individuals to express their opinions.

In “Publisher pushback puts open access in peril,” written by Virginia Barbour and published May 21, 2015 at The Conversation, we get an Australian perspective that’s fairly similar, although the article (and even more so some of the comments) make the usual error of assuming that Gold OA means APCs (or at least “typically” does).

Stevan Harnad speaks up in opposition to the new Elsevier policy in “Elsevier: Trying to squeeze the virtual genie back into the physical bottle” on May 25, 2015 at Open Access Archivangelism—and if you can ignore Harnad’s crapola about “Fool’s-Gold OA,” it may be worth reading. My patience for Harnad’s rhetoric is, I’m afraid, limited, as is my belief in the “then MAGIC HAPPENS” scenario in which green OA shortly leads to a publishing revolution.

The story continues back at Scholarly Communications @ Duke, with Kevin Smith’s May 15, 2015 piece “From control to contempt.” It’s partly about the Elsevier situation, but adds some other issues. Best read in the original—and a professional society, ASME, comes off looking much worse than Elsevier. (Briefly, ASME is apparently defining articles as works for hire, which places all copyright control in ASME’s hands, and even requires the author to waive moral rights.) The final paragraph:

To me, this agreement is the epitome of disrespect for scholarly authors. Your job, authors are told, is not to spread knowledge, not to teach, not to be part of a wider scholarly conversation. It is to produce content for us, which we will own and you will have nothing to say about. You are, as nearly as possible, just “chopped liver.” It is mind-boggling to me that any self-respecting author would sign this blatant slap in their own face, and that a member-based organization could get away with demanding it. The best explanation I can think of is that most people do not read the agreements they sign. But authors — they are authors, darn it, in spite of the work for hire fiction — deserve more respect from publishers who rely on them for content (free content, in fact; the ASME agreement is explicit that writers are paid nothing and are responsible for their own expenses related to the paper). Indeed, authors should have more respect for themselves, and for the traditions of academic freedom, than to agree to this outlandish publication contract.

The focus returns to Elsevier’s new policy in “A distinction without a difference” (same author, same blog, May 29, 2015). There were a bunch of list and blog posts and tweets about the new policy, with Elsevier folks actively engaged in the discussions, leading to:

As I read one of the most recent messages from Dr. Alicia Wise of Elsevier, one key aspect of the new
policy documents finally sunk in for me, and when I fully realized what Elsevier was doing, and what they clearly thought would be a welcome concession to the academics who create the content from which they make billions, my jaw dropped in amazement. It appears that Elsevier is making a distinction between an author’s personal website or blog and the repository at the institution where that author works. Authors are, I think, able to post final manuscripts to the former for public access, but posting to the latter must be restricted only to internal users for the duration of the newly-imposed embargo periods. In the four column chart that was included in their original announcement, this disparate treatment of repositories and other sites is illustrated in the “After Acceptance” column, where it says that “author manuscripts can be shared… [o]n personal websites or blogs,” but that sharing must be done “privately” on institutional repositories. I think I missed this at first because the chart is so difficult to understand; it must be read from left to right and understood as cumulative, since by themselves the columns are incomplete and confusing. But, in their publicity campaign around these new rules, Elsevier is placing a lot of weight on this distinction.

The rest of the post discusses why, in many cases, this really is a distinction without a difference.

When I cited SPARC earlier in this multi-citation discussion, I should have mentioned that the statement appears on the website of a primary cosigner, COAR, the Confederation of Open Access Repositories. After further discussion and a commentary by Elsevier’s Alicia Wise, COAR posted “Re COARrecting the record” on May 28, 2015—noting that the statement now has 700 signatories (groups and individuals). Unfortunately, the comments that yielded this response—supposedly “at the bottom of the page”—don’t seem to be there. Meanwhile, COAR offers some “concrete recommendations for Elsevier to improve their policy”:

1. Elsevier should allow all authors to make their “author’s accepted manuscript” openly available immediately upon acceptance through an OA repository or other open access platform.

2. Elsevier should allow authors to choose the type of open license (from CC-BY to other more restrictive licenses like the CC-BY-NC-ND) they want to attach to the content that they are depositing into an open access platform.

3. Elsevier should not attempt to dictate authors’ practices around individual sharing of articles. Individual sharing of journal articles is already a scholarly norm and is protected by fair use and other copyright exceptions. Elsevier cannot, and should not, dictate practices around individual sharing of articles.

There is, as you can imagine, a lot more that I could quote, and I don’t believe this story has run its course, but I’ll close this cluster with “Journal substitutability, hassle factor, and green open access,” posted May 29, 2015 at Gavia Libraria by the Library Loon. The Loon links to commentaries by Mike Eisen and Mike Taylor and questions their assertions that the new Elsevier policy threatens green OA—because she doesn’t think Elsevier “can make this stick,” given its inability to make similarly repressive policies stick. But that’s not why I’m including the piece. No: it’s because of a really nice section about substitutability. Journals aren’t substitutable for libraries—if you need an article in Science, your subscription to Nature probably won’t help—but:

What the Loon wants to suggest is that for authors, journals are indeed substitutable—not infinitely so, to be sure, but enough to matter…

She recounts a “last journal standing” situation in which she and some article coauthors narrowed down an LIS submission based on things like OA friendliness and other issues.

If this choice-by-elimination process is broadly typical, and the Loon has no reason to think it isn’t, it presents a problem for Elsevier’s anti-open-access tactics, because Elsevier constantly and consciously designs those tactics in ways that make it far more likely authors will eliminate their journals from consideration because of hassle factor—coauthor hassle factor, journal-style hassle factor, “keep the funder happy” hassle factor, speed of publication hassle factor, and so on.

As the Loon points out, the “glamour journals” (two mentioned above) publish such a tiny percentage of all scholarly articles that they’re really not issues. An interesting discussion, and a good place to close this.

The Medical Journal of Australia vs Elsevier

Here’s a juicy one, by Matt Wedel on May 6, 2015 at Sauropod Vertebra Picture of the Week. The short form: the company that publishes this journal—wholly owned by the Australian Medical Association—decided to outsource much of the production to Elsevier. The editor-in-chief, Stephen Leeder, raised concerns about this. He was sacked for his trouble.

After Leeder was pushed out, his job was offered to MJA’s deputy editor, Tania Janusic. She declined, and resigned from the journal, as did 19 of the 20 members of the journal’s editorial advisory committee. (Some accounts say 18. Anyway, 90%+ of the committee is gone.)

When we first discussed the situation via email, Mike wrote, “My take is that at the present stage of the OA transition, editorial board resignations from
journals controlled by predatory legacy publishers are about the most important visible steps that can be taken. Very good news for the world, even though it must be a mighty pain for the people involved.”

Some links to additional coverage and a closing comment that’s hard to argue with: “The sooner we move to a world where scientific results and other forms of scholarly publication are freely available to all, instead of under the monopolistic control of a handful of exploitative, hugely profitable corporations, the better.” Even if I don’t believe that will entirely happen within my lifetime… The first comment links to the former editor’s own commentary, very much worth reading.

Miscellany!

I’m pretty sure that’s the first time in 15 years I’ve used “Miscellany!” as a heading—and I’d bet there aren’t a whole bunch of appearances of that string anywhere else. Still, in keeping with the excitable theme of this roundup…

Of Predators and Public Health

Kevin L. Smith published this “Peer to Peer Review” column at Library Journal on May 23, 2013. It’s about the American Journal of Public Health, a well-established subscription society journal with a reasonably high impact factor. Smith begins:

Why would one decide to publish a journal on public health? It sounds like a rhetorical question, but it may be more serious than we think. The obvious answer is to improve the health of the public. But if that really is the goal, a publisher in public health would need to try to reach the largest audience of the public that was possible. So a recent announcement from one prominent public health publisher casts doubt on that intent, and the purpose of the journal overall.

AJPH was already pretty retrograde in OA terms: it does not allow authors to self-archive their articles in either pre-print or post-print form. Authors’ only rights are to use their own work in other stuff they’re writing—after submitting a formal permission request to the society. As for access? There were two routes: $2,500 to make an article OA—or wait out a two-year embargo. Oh, and at the end of the two-year embargo the articles aren’t actually OA: linking is only permitted “for educational purposes.” But it gets worse:

From these policies as they stand, we should be able to discern that profit is more important than fostering public health to the Association. But even with that context as preparation, the message they sent to authors late last month was a shock. Those authors
who did not pay for immediate open access believed, at the time they published, that their articles would be fully accessible after two years. But on June 1, the APHA is changing the rules, according to an email from their Publications Editor. The window for open access will be closed much further—only articles that are ten years old or older will be open access. The American Journal of Public Health has decided that the public deserves access to only decade-old materials it has published, which is a useless gesture, given the pace of health-related progress.

Ah, but authors who think ten years is too long to wait can pay a “steeply discounted rate” of $1,000 per article to make their articles sort-of available. So it seems clear that authors who have already published with APHA over the past decade are being treated as cash cows that can be milked for additional funds—”pay up or the public loses the benefit of access to your work in public health.” And eight years of health-related information is clawed back out of public hands (except for those articles also available by federal mandate in PubMed Central). I have sometimes complained about lists of predatory open access journals because I think the criteria used are not always the right ones. But if any publishing practice can be viewed purely as an attempt to exploit open access in order to extract money from authors while offering little added benefit, this change in policy is such a practice.

Smith takes three lessons from this (explained in more detail in the column): impact factors are deceptive; we need more government requirements for access to publicly-funded research; and it makes sense to fight for short embargo periods. A handful of comments, including a bizarro one from J. Beall, who I guess must object to “predatory” being used to describe his beloved subscription journals. He says Smith should “stand back and take a broader view”; apparently because there are OA journals in public health, it’s wrong to condemn AJPH for its actions. (There’s a four-part conversation that leaves me even more confused as to why Beall thought he needed to comment, although a “(truly predatory)” parenthetical remark in the first comment is at least suggestive: that is, only OA journals can be “truly predatory.” He doesn’t say that, but it’s surely implied.)

Activism or Science? A Debate on Open Access

This post by Erin McKiernan on July 24, 2013 at Erin C. McKiernan is fascinating, and I think you should read it—and the comments—in the original, but a few notes may be in order. McKiernan, a neuroscience researcher, was writing a systematic review, an attempt to give a complete
overview of the literature on a certain topic and quantify some aspects of the research.

To give you an idea of the scale of my review, my searches retrieved 370 results. 73 of these were duplicates and excluded, leaving me with 297 unique articles to screen. From these, I excluded 183 studies whose results were irrelevant to the questions my review seeks to answer, a further 8 which were review or comment articles, and another 7 studies which used techniques other than the ones to which I’m restricting my review. Many of the articles I retrieved and screened were published in subscription journals, but I gained access via a university affiliation. Even so, there were some articles I couldn’t retrieve.
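Just to tally that screening funnel (the arithmetic, including the final figure, is mine; McKiernan doesn’t state it):

    # Screening funnel as described in McKiernan's post
    retrieved = 370
    unique = retrieved - 73            # duplicates removed -> 297
    excluded = 183 + 8 + 7             # irrelevant + reviews/comments + other techniques
    candidates = unique - excluded     # 99 studies left to consider
    inaccessible = 7                   # the articles she couldn't legally obtain
    print(unique, candidates, candidates - inaccessible)  # 297 99 92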

So she tweeted that she’d excluded seven articles because she doesn’t have access. “If I can’t read your paper, I’m not citing you.” And used the hashtag #openaccess. Suddenly this was a political action and unscientific. She doesn’t agree: excluding articles you can’t legally access is a scientific issue, related to completeness.

First, from a practical standpoint, I can’t extract the information I need. My review involves specifying certain methodological details of each study – details which are not available in the abstracts. Second, and more generally, I cannot evaluate the quality or validity of a study if I can’t read it in full. There are other studies which I have excluded from the review after reading the full text and finding inconsistencies in the methods, or disagreement between statements in the methods and results sections. As a scientist, I think it is irresponsible of me to cite work I haven’t properly evaluated.

There’s quite a bit more. She was accused of activism (though she has good institutional access and did, in fact, use 40 paywalled articles in the review) and told that it was up to her to get access one way or another—although some disagreed, saying that the burden should be on researchers to make work accessible, not on everybody else to try to access it. It was apparently a passionate discussion, and I can see why. McKiernan isn’t sure she has the answers, and planned to make further efforts to access the seven articles:

So, I will make additional efforts to retrieve those articles I am missing. If I get them, great. If I don’t, then I can say my reason for excluding them was purely scientific. I could not evaluate the work. Period. And in the meantime, I’ll keep advocating for open access. Hopefully, the next time I do a systematic review access won’t be an issue.

The comments are worth reading. Is this an ethical issue? Most certainly, especially since some ways of acquiring articles are technically illegal.

One takeaway for me is that scholars in smaller institutions are simply unable to consider doing systematic reviews: they just can’t get at the articles. Another should be obvious: if you’re doing anything except a systematic review, or if you’re an independent scholar, you’re far more likely to cite material you can legally obtain—that is, OA material, either gold or green. In my mind, that’s ethically appropriate.

Open-access website gets tough

Richard Van Noorden published this on August 6, 2014 in Nature’s news section. It’s about the newish criteria for DOAJ inclusion (the tease is “Leading directory tightens listing criteria to weed out rogue journals”). It’s an interesting article in a number of ways. For one thing, I believe it’s the first and possibly only time my OA-related research has been cited in a Big Deal Publication (which, if anyone was so inclined, could now provide a legitimate basis for a Wikipedia article on me or on the work—just as Beall’s list is viewed as a reputable source by Wikipedia, even though it’s just a blog page, because it’s been written up in print venues). I’m not thrilled with the way the mention is worded, as it pretty much flips my conclusion on its head, but still:

The DOAJ, which receives around 600,000 page views a month, according to Bjørnshauge, is already supposed to be filtered for quality. But a study by Walt Crawford, a retired library systems analyst in Livermore, California, last month (see go.nature.com/z524co) found that the DOAJ currently includes some 900 titles that are mentioned in a blacklist of 9,200 potential predatory journals compiled by librarian Jeffrey Beall at the University of Colorado Denver (see Nature 495, 433–435; 2013). In addition, journalist John Bohannon last year proved that at least 73 journals in the DOAJ were suspect; in a sting operation, he sent them an obviously flawed paper which they then accepted for publication (J. Bohannon Science 342, 60–65; 2013). The DOAJ removed the journals from its index (see ‘Stunted growth’).

I’d put it the other way: even if you accept that everything on Bohannon’s list is predatory—which I don’t—then less than 10% of the journals and “journals” in those lists were in DOAJ even under the old criteria. (Note that “some 900” is already down to “some 776,” since MDPI is no longer on the list.) Then there’s this:

It is not clear whether the DOAJ’s whitelist will become the pre-eminent index of trustworthy open-access journals. Beall says that the directory’s credibility has already been hurt and that its new approach is “too little, too late”.


That’s in line with Beall’s previous trashing of DOAJ. Of course, the new DOAJ effort involves more than two dozen editors (mostly librarians and PhD candidates) rather than one clearly anti-OA librarian; if credibility is the issue, I know which side I’d be on. Later, Beall takes another chance to dismiss whitelists, although his argument could be equally taken as a sign that his blacklists aren’t working. I do like this, but then I would, wouldn’t I?:

Bjørnshauge says that a small cohort of some 30 voluntary associate editors—mainly librarians and PhD students—will check the information submitted in reapplications with the publishers, and there will be a second layer of checks from managing editors. He also finds it “extremely questionable to run blacklists of open-access publishers”, as Beall has done. (Crawford’s study found that Beall’s apparently voluminous list includes many journals that are empty, dormant or publish fewer than 20 articles each year, suggesting that the problem is not as bad as Beall says.)

One commenter disagrees, finding that empty journals are a sign of things getting even worse.

The Subscription Publisher is Inherently Anti-Science

Lenny Teytelman makes this seemingly broad ethical assertion in a January 14, 2015 post at The Spectroscope. He begins with a brief note about the “chatter over the past year about predatory open access publishers”—and includes a link to another essay that you may find interesting. But that’s not the theme here.

In addition to the many moral and practical arguments for open access, it turns out that the paywall itself creates a stunningly perverse anti-science system.

His basis for this assertion is primarily the work he’s been doing to create protocols.io, a “free, open access, crowdsourced protocols repository.” What’s a protocol? I’m not a scientist, so I’ll quote Wikipedia’s definition: “a predefined written procedural method in the design and implementation of experiments.” You need stated protocols for replication to be possible—and you’d expect protocols to be inherently as open as possible. You’d be wrong.

Throughout the past two years of work on protocols.io, I have talked to many directors and CEOs of biomedical publishing companies. Since we are building an open, up-to-date, central protocol repository, it is important for us to seed protocols.io with the commonly-used methods. Ideally, we want a hundred comments and forks on a single trusted Western Blot protocol instead of 100 independently-entered variants. We need a tree with branches instead of a forest, so that we can visualize the changes and optimizations. Therefore, I reach out to the publishers offering them: Let’s put only the steps of the protocols and link back to the original publication, as we do with all third-party content on protocols.io right now. The benefits to you [the publisher] would include:

1. Each method is instantly “runnable” on the web/mobile with all changes recording to a cloud-synchronized notebook.

2. There will be a separate section for Your Journal on our site and it will drive additional traffic to you for further information.

3. We will share free of charge all corrections and improvements of your methods on a monthly or quarterly basis.

The reply is always “no thank you.” The reason—publishers are afraid to lose subscription revenue. They worry that making even just the method steps publicly available in open access format will lead to some libraries dropping subscriptions to their journals. The incredible part of this is that the value to their reader - the improved science method - is dismissed entirely because it is not directly tied to the subscription revenue.

Read the first sentence in that last paragraph again: The reply is always “no thank you.” I haven’t talked to all publishers, but after a year of these conversations with many, it’s clear to me that they will not partner. It’s amazing because most of these directors are scientists and they know just how badly our research enterprise needs this central protocol repository. One of them even said to me, “Lenny, I admire what you are trying to do as a scientist. But we have invested so much into publishing these methods, we can’t just give them away to you.”

The publisher didn’t pay to create the protocols. It didn’t pay to write them up. They’re (probably) not copyrightable—they’re like recipes in that regard. Note that he’s not asking for permission to republish the articles: just the protocols. Is refusal to allow that ethically questionable? I know my answer.

In which the DOAJ is clever

A short and focused piece by the Library Loon on August 18, 2015 at Gavia Libraria—focused, that is, on one of the rare cases in which a journal blacklist pretty clearly works and seems uncontroversial to me. The case? When a journal is explicitly lying. In this case, lying about being listed in DOAJ. This strikes me as a particularly stupid lie because it’s so easy to check. I mean, it’s not difficult to search for a journal title in DOAJ. But there’s an even easier way: DOAJ maintains a list of journals that falsely claim inclusion.


Claiming you’re in DOAJ when you’re not falls into the same category as claiming that your journal is indexed by Ulrich’s or that you do rigorous peer review with a one-day turnaround or that your OA journal that began in 2014 is the world’s leading authority on X (unless X is defined in staggeringly narrow terms). It’s a flat-out lie, and no author or reader should regard the journal as anything other than a bad joke, certainly not as a serious journal. The Loon’s take on DOAJ’s list:

Imprimis, it takes a little wind out of the sails of a certain other list of increasingly and deservedly ill repute. The single criterion for inclusion on DOAJ’s list is admirably transparent and simple to understand, as well as unquestionably mendacious.

Secundus—and the Loon is shaking her beaked head in shame for not having guessed at this—it neatly takes advantage of the well-known proclivity of scammy journals to slather as many official-looking names and logos as possible over their web presences, even when they are irrelevant or (as in DOAJ’s case) claim a false association. In other words, DOAJ’s criteria hardly had to be watertight in order to lure scamsters into its trap. DOAJ simply had to create a desirable mark, wait for scamsters to lie about it (as DOAJ could perfectly well predict they would do), and call them on it.

No argument here.
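Nor is the check hard to automate. Here’s a minimal sketch in Python, assuming you’ve saved a copy of DOAJ’s journal-metadata CSV export (the file name and the “Journal title” column header are my assumptions; adjust to whatever the current export actually provides):

    import csv

    # Build a set of titles from a locally saved DOAJ journal-metadata CSV
    # (file name and column header are assumptions, not DOAJ's documented API).
    with open("doaj_journals.csv", newline="", encoding="utf-8") as f:
        doaj_titles = {row["Journal title"].strip().lower()
                       for row in csv.DictReader(f)}

    def doaj_claim_is_true(claimed_title):
        # True only if a journal claiming DOAJ inclusion is actually listed
        return claimed_title.strip().lower() in doaj_titles

    print(doaj_claim_is_true("International Journal of Advanced Everything"))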

When ‘exciting’ trumps ‘honest’, traditional academic journals encourage bad science

Robert de Vries goes into some detail about this in an August 4, 2014 piece at The Conversation—and it’s directly related to publication practices of scholarly journals but moves part of the ethical issues all the way back to the scientists. It’s tempting to quote the whole piece (it’s quite good), but I won’t for two reasons: It’s long…and the CC license is BY-ND, and I’d guess that this larger essay constitutes a “derivative.” But never mind… de Vries says that (almost) any scientific study includes large numbers of decisions on definitions, methods, etc. “On any given project a scientist will probably end up trying many different permutations, generating masses and masses of data.”

The problem is that in the final published paper—the only thing you or I ever get to read—you are likely to see only one result: the one the researchers were looking for. This is because, in my experience, scientists often leave complicating information out of published papers, especially if it conflicts with the overall message they are trying to get across.

“Leave complicating information out” is another phrase for cherry-picking, or at least it can be: it tends to make the results more dramatic and less, well, truthful. Is it a rare problem? Not according to a meta-analysis of surveys of scientists regarding research misconduct (the link is in de Vries’ article and I did not read all of the linked article):

[A]round a third of scientists (33.7%) admitted to things like dropping data points based on a “gut feeling” or selectively reporting results that “worked” (that showed what their theories predicted). About 70% said they had seen their colleagues doing this. If this is what they are prepared to admit to a stranger researching the issue, the real numbers are probably much, much higher.

Is this a serious problem? de Vries thinks so:

It is almost impossible to overstate how big a problem this is for science. It means that, looking at a given paper, you have almost no idea of how much the results genuinely reflect reality (hint: probably not much).

That link goes to another article which, like the earlier one, is in a PLOS journal: “Why Most Published Research Findings Are False.” As far as de Vries is concerned, the problem isn’t sketchy scientists—it’s the way research is published: “Specifically the pressure all scientists are under to be interesting.” Especially if you’re aiming for the biggest-name journals (which, perhaps not surprisingly, have some of the worst retraction rates). There’s a lot more here, including the difficulty of publishing either negative results or replications or, for that matter, ambiguous results. There’s pressure to “clean up” results and yield clear, interesting outcomes—even if those outcomes don’t reflect all of the data and may not reflect reality.

We are making huge, life-altering decisions on the basis of bad information. All because we have created a system which treats scientists like journalists; which tells them to give us what is interesting instead of what is true.

One answer is the PLOS One approach: review for scientific validity, not for importance. There may be other answers, but all of them seem to imply moving away from high-rejection high-”importance” approaches to peer review.

Humor‽

What’s actually funny depends largely on who you are; thus, the interrobang ‽ (Alt+8253, if you’re wondering, and hat-tip to Wikipedia) instead of a plain old exclamation point.


Manuscript decision? Please pick a number

Short but sweet, by Edward A. Ross in the June 18, 2005 issue of…The Lancet? Yep. Choose a number from one to eleven. Then go to the link. I chose #7 before going to the article. No, I’m not going to quote it. Not any of it. Go read it. But first choose a number.

Does irony have a place in science?

“amarcus41” asked that question on December 23, 2014 at Retraction Watch.

Take us at our word when we tell you this isn’t some exercise in meta-irony or meta-criticism or any other meta-bullshit, but a pair of researchers at Drexel University in Philadelphia have published a paper calling for an end to irony in science.

First, some background: In 2001, an Israeli researcher named Leonard Leibovici wrote a letter to the famously lighthearted Christmas issue of the British Medical Journal describing a randomized controlled trial in which intercessory prayer at a distance — in other words, people praying for other, sick people — was found to improve the health of patients with bloodstream infections. All the more remarkable was that this prayer was “retroactive,” as in, it purportedly occurred years after those sick patients had either left the hospital or died. The article clearly was a gag (it ran under the heading “Beyond Science”), as Leibovici later admitted—but not before other researchers cited his work, many favorably.

Say what? Scientists took seriously a paper claiming retroactive effects from prayer? Sure they did… Well, in a way, two of them did. Maybe. Two people in Drexel’s Department of Culture and Communications wrote a piece for Science and Engineering Ethics, “The Ethics of Ironic Science in Its Search for Spoof.” I’ve provided the link, but I’ve only read the abstract…because there’s no way I’m paying $39.95 to read the paper itself. Here’s the last sentence of the abstract:

We conclude that publishing ironic science in a research journal can lead to the same troubles posed by retracted research, and we recommend relevant changes to publication guidelines.

Those guidelines presumably being “No humor here. Not ever. Nope. We’re always serious.” The Retraction Watch author presumably had access to the journal, since it’s clear that they read it. The article claims that 15 other papers cited Leibovici’s article at face value, along with one letter criticizing him for not obtaining informed consent and saying “Ethical issues should not be limited to linear time.” Cites & Insights

At this point, we found ourselves wondering if Ronagh and Souder were operating on an even higher level of irony, but they pulled us out of the rabbit hole with the following admission:

It’s worrisome to speculate whether the commentary in BMJ is itself ironic. If so, we fall into a hopeless morass of uncertainty about all content in BMJ, the research corpus, and even this analysis of ironic science itself, a condition Booth (1974) calls unstable irony and a threat to epistemology generally.

Worrisome, indeed! Which is why Ronagh and Souder end their manifesto with a call to arms. Although they admit that Leibovici was trying to point out a weak spot in science and science publishing: to wit, not everything that can be studied can be studied validly. But, they write, his instrument was too dull, or maybe too sharp:

This lesson may be important but its cost is dear, for the means of delivery — irony — has enlightened some but misled others. For this reason the blemish on the scientific record left by Leibovici’s paper must be expunged. Though it will be disappointing for the reader and tedious for the author to issue a retraction (much like the anticlimax that results from explaining a joke), Leibovici’s community will not be secure in trusting the research record otherwise.

We don’t profess to know much about irony (though we have been accused of flinging snark, which is a base form of the art, we suppose), but Ronagh and Souder then stick a toe into waters with which we are a bit more familiar:

Based on this study, we worry that the integrity of the scientific record is as vulnerable to ironic science as it is to retracted research. … Readers of scientific research will not expect a published paper to express certainty about the truth of its content, but they should expect certainty about the truthful intent of its authors.

Really? The Retraction Watch author says it’s simply not true that science is more vulnerable to irony than to retraction: there are a handful of spoofs (as spoofs, that is) as compared to 500 to 600 retractions per year. And, they note, even the number of retractions is tiny compared to “the 1.4 million-odd published papers each year.” (Interesting: most estimates put that number about one million papers higher; at 1.4 million, gold OA would be one-third of the total, and I’ve heard no such claims.)

But the larger point, that researchers should be able to accept what they read at face value and without any skepticism or intellectual effort is akin to saying that you should drink from any bottle you come across because, well, it’s liquid in a bottle, by golly!

Sorry, but if some scientists are too gullible to detect an obvious spoof/hoax, they should probably find a new profession.


The author did check with Souder to see if the “no irony in science!” paper might actually be a hoax, and was reassured that “Our paper is quite sincere.” The author ends with a few words from Leibovici, and I think they’re excellent words, worth reading in the original. The article includes links to two other discussions of this whole situation. Neuroskeptic quotes something from the Ronagh/Souder article that I find somewhat astonishing: “the blemish on the scientific record left by Leibovici’s paper must be expunged.” Because, you know, all that dead-serious research on the effects of intercessionary prayer at a distance might be taken less seriously? (My gloss, not theirs.) Just so it’s clear: Leibovici did in fact carry out the experiment described and did report on it accurately, but he thought the whole thing was ludicrous. Maybe he actually “proved” something different: intercessionary prayer works, even retroactively, if the person doing the praying doesn’t believe in it. Or not. The British Medical Journal still does its special Christmas issue full of off-kilter papers; in fact, the Leibovici paper appeared in 2001. It’s certainly the case that other joke papers have been cited seriously. The second linked piece—by Rose Eveleth at The Atlantic—seems to be saying that BMJ ought not to do this and that Ronagh/Souder are right. Or maybe I’m missing Eveleth’s humor? No, probably not.


My new job: Owner and publisher of the International Journal of Usability, Systems and Technology

John Dupuis made this announcement on April 1, 2015 at Confessions of a Science Librarian, noting that he’s “taking a leap back into the scholarly publishing world” with the founding of a new OA publisher, Dupuis Science & Computing and Medicine! DSCAM’s first journal is named in the title; the first section in the journal will be on Concurrent Algorithms and Network Topology. But let’s let the owner have a say:

It’s been a long and strange journey to this point, but I think it’s the right time. The production of scholarship is exploding, with more and more articles published every year in an ever increasing number of scholarly journals. But so much of what is being published is locked behind the rapacious paywalls of predatory commercial and society publishers. Time to liberate the articles! The growth of new business models has allowed pretty well anyone with an entrepreneurial bent to enter the market and advance the cause of science and scholarship. So, I thought, time to stop being a librarian, sitting around thinking deep thoughts about how the scholarly communications ecosystem should work and take the plunge! Time to become a Man of Action! Time to make some money!

The name of my new journal is representative of where scholarship in computer science is headed—open access, international in scope and focused on how real people interact with systems and technology.

The announcement thus constitutes a call for papers for IJUST-CANT. The APC is a modest $500 per article; there’s a surprisingly clear and full set of journal information; and there’s an ex-distinguished editorial board. (Full disclosure: I’m listed on the editorial board, but my name is spelled correctly.) Do read the whole piece, including comments on the second round of journals to be launched daily starting May 2015. It’s quite a list. One suggestion, John: change the publisher’s name to Dupuis Science, Computing, Arts & Medicine: the acronym’s the same and you won’t lose out on the oh-so-lucrative field of humanities OA.

Peer Review!

There’s so much to say about peer review that I’ve split it into three areas, based on—well, intercessionary prayer sounds as good as any. I think of these as “alternatives to peer review and whether OA peer review works effectively.”

On Rejecting Journals: A soft boycott of closed-access journals may be a more effective way to realign resources.

Paul Kirby posted this on August 14, 2013 on LSE’s Impact of Social Sciences blog. The lede:

Yesterday, in an act of minimal defiance, I declined a request for peer review on the grounds that the journal was owned by Taylor and Francis, and therefore charges authors £1,788 per piece for open access, or imposes an 18 month restriction on repository versions. In the wake of the OA debate, this situation seems increasingly ludicrous: for the short term at least, an increase in journal profit streams, made possible by the sanctity of unpaid academic input. The principle (saying no to closed journal peer review) is not inviolable, but a reluctance to subsidise shareholders with free labour seemed an appropriate response to the current balance of forces.

Kirby notes that this rejection could be considered hypocritical if he submits articles to traditional journals. There is, of course, a solid response—but he’s not ready to make that response.


The most forceful of open access advocates would point out at this stage that the answer to this dilemma is pretty straightforward: don’t review for closed access journals and don’t publish in them. Simply move your labour—writing, reviewing, editorial board-ing—as quickly as possible to the more open journals. The more of us who do that, the quicker the transition to proper open access will be. This is true, but it won’t quite do. For two reasons.

Briefly, the first reason is that OA journals in the humanities and social sciences aren’t prestigious enough; the second is that “journals are not just empty vessels, and are not interchangeable in content, editorial policy or audience.” Beyond that, I don’t quite get how the second argues against OA, except in the same way the first does: In essence, “this is the way things are. OA’s a great idea, but…” Expanding on the second point, it has much to do with audience, and there’s a sound way of assuring that nothing will ever change: You choose your venue based on the audience you plan to reach. Reading the rest of the post, I’d like to say I’m impressed—but I’m not. His “soft boycott” is to submit to subscription journals because that gets him the best rewards, and to deny them his free reviewing labor because…I’m afraid “strategic hypocrisy” as a phrase comes off to me as having an irrelevant first word.

Why open peer review does not fix the bad-journal problem

This post by the Library Loon on October 11, 2013 at Gavia Libraria cuts across several areas and perhaps belonged in an earlier roundup. The title describes the discussion and harks back to earlier discussions about “bad journals” where it was suggested by commenters that open peer review could solve this problem. The Loon posits universal open review as a mental experiment—and doesn’t think it would do what she views as the needed job.

Why does this not suffice? Because it does not offer the quickly-applicable sort-by-quality criteria that many participants in academe and users of its products need so badly that where there is a dearth of reasonably reliable criteria (as now, of course), they rely on manifestly unreliable and unfit-for-purpose ones.

She notes some of the users who need to winnow out the good (trustworthy?) articles, from tenure committees and grant reviewers to layfolk looking for worthwhile information.

Many of these are not subject experts. None of them can spend hours on answering the question “is this trustworthy?” one article at a time. Given that faking open review well enough to fool the lay gaze is trivial (Yelp and Amazon may serve as exemplars here), the simplest assessment algorithm for a single article that anyone has offered runs something like this:

1. Does the article have no reviews? It must be bad or unimportant, then. (Just this stops the Loon dead in her webfooted tracks, knowing how little incentive there is to review openly or otherwise, but she sees no alternative.)

2. Otherwise, for each review, check the review author’s credentials, and verify (somehow) that the putative author did in fact write the review.

3. Assess the trustworthiness of the reviews based on what is known about their authors. (Remember, we cannot assume that our assessor understands the field or research methods well enough to evaluate either the article or the review directly.)

4. Based on that, form an educated (?) guess about the article’s value and reliability.

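Spell that algorithm out as code and the infeasibility gets even plainer: the two middle steps become inputs nobody knows how to produce. A literal-minded sketch (the data fields standing in for “verified authorship” and “author trustworthiness” are hypothetical placeholders, mine and not the Loon’s):

    from dataclasses import dataclass, field

    @dataclass
    class Review:
        author: str
        verified: bool       # step 2: did the named author really write this?
        author_trust: float  # step 3: how trustworthy is that author? (0..1)

    @dataclass
    class Article:
        reviews: list = field(default_factory=list)

    def assess(article):
        # Step 1: no reviews => "bad or unimportant, then"
        if not article.reviews:
            return None
        # Steps 2-3: keep only verified reviews, weighted by author trust
        usable = [r.author_trust for r in article.reviews if r.verified]
        # Step 4: an "educated (?)" guess from whatever survived
        return sum(usable) / len(usable) if usable else None

    print(assess(Article()))                              # None
    print(assess(Article([Review("Dr. A", True, 0.8)])))  # 0.8

The hard part, of course, is producing the “verified” and “author_trust” values at all; that is exactly the work a non-expert reader can’t do one article at a time.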
Hard to argue with the Loon’s conclusion that this strategy is simply not feasible for most unobsessed people in the real world. She notes some other things open review wouldn’t do, mostly having to do with the negative effects of “bad journals.”

If neither the apparent fact of peer review nor the apparent fact of open review (“apparent” because as discussed, claiming certain review processes is not the same as doing them, or doing them responsibly) suffices as a quick quality measure, we have two non-exclusive choices: find another such measure, or winnow the pool of journals to exclude as many bad actors as possible. The Loon believes both choices have merit—she is certainly interested in alternative metrics as quick-judgment facilitators—but neither suffices in the absence of the other. She will therefore continue to ponder viable means of eliminating bad actors, and encourages her readers to do the same.

I’m not sure what to say about this post, and I may not know enough to comment intelligently. When you read it, do read the comments as well—there aren’t many and they’re all cogent.

The Vacuum Shouts Back: Postpublication Peer Review on Social Media


Zen Faulkes posted this “NeuroView” piece April 16, 2014 at Neuron—yes, an Elsevier publication. (I had to download the PDF to read the full three-page piece; the online version stops about halfway through. But the download was free.)


Social media has created new pathways for postpublication peer review, which regularly leads to corrections. Such online discussions are often resisted
by authors and editors, however, and efforts to formalize postpublication peer review have not yet resonated with scientific communities.

Faulkes thinks postpublication peer review is a good thing, but not in lieu of prepublication peer review. That’s an interesting discussion, one I won’t get into here (you can read Faulkes’ piece). The parenthetical advice is probably about all I can say here. I wonder to what extent peer review via social media (Facebook, Twitter, blogs, etc.) will be findable and fully effective, but I may be overthinking. It is, of course, happening already.

Peer Review as a Service: It’s not about the journal

Here’s an oddity. It’s by Chris Lintott, Stuart Lynn, Robert Simpson and Arfon Smith; it was published on May 9, 2014; it describes a sort of overlay journal approach—where articles are posted in arXiv, opened for comments and resolution of comments in a GitHub repository; and “accepted” once enough issues are resolved. I would say “it was published at theoj” but that’s not quite right: The post is theoj.org (The Open Journal as a web title). Period. It includes a variety of links, including two codebases for people who might want to complete the concept and one to a GitHub repository for, I guess, the thing itself. This may be brilliant. If so, I’m (once again) the wrong person to comment. I do have to say that the proposed first journal, Open Journal of Astrophysics, suffers slightly because “Open Journal” and “Open” as leading portions of a journal title have both been associated with publishers that are, in one case, claimed to be unsavory and, in another, pretty much known to be unsavory.

Open Access Works are as Reliable as Other Publishing Models at Retracting Flawed Articles from the Biomedical Literature

That’s the title (and key conclusion) of Elizabeth Stovold’s July 23, 2014 review, in Evidence Based Library and Information Practice, of a G.M. Peterson article (in a paywalled journal). The article being reviewed looked at 160 retracted papers (in PubMed Central) published in the 2000s. The review is short enough that it makes sense to read the original (EBLIP is one of several excellent OA journals in librarianship). The conclusion in the original paper:

Open access literature is similar in its rate of retraction and the reduction in post-retraction citations to the rest of the biomedical literature, and is actually more reliable at reporting the reason for the retraction.

As Stovold notes in her commentary, the study’s findings are “reassuring for librarians and searchers who may be recommending open access journals to researchers and practitioners in the biomedical field as sources of reliable evidence.”

The Winnower: a “radical” publishing platform that encourages debate. Interview with Josh Nicholson

To close this odd set of pieces about alternatives to peer review and that sort of thing, here’s an interview done by Brian Mathews and published July 9, 2015 in The Ubiquitous Librarian. It’s about The Winnower—which is sort of a postpublication peer review supermegajournal, more or less. It’s OA—but with author-side fees ($25 or less) if you want a submission to be turned into an archived article. If that seems confusing, so—in some ways—is the whole thing. Josh Nicholson was unhappy about the state of scholarly publishing. The Winnower is his solution. It’s one big site to rule them all, a handsome “journal” composed of many broad topics with even more subtopics, each leading to some number (from zero on up) of pieces. I won’t exactly say “articles,” because “pieces of work” (Nicholson’s term) seems to be more appropriate—i.e., “conference proceedings, grants, open letters, responses to grants, peer reviews, logistics for organizing symposiums, and more.” Nicholson says:

The Winnower was founded based around this idea of identifying good/bad work openly via post-publication peer review. This shift in the publication model has many benefits to the current system and I think it is only a matter of time before most publishers follow it. Since review happens after publication it means ideas and work can be communicated immediately as opposed to the months/years it takes at traditional publishers. Since work is guaranteed to be published it encourages authors to be right as opposed to simply “passing peer review.” What I mean is there is no real need to take shortcuts to get past peer review because peer review is no longer a “you should do these experiments etc etc” before publication.

Is appearance in The Winnower really “publication”? Given that the decision to archive something—make it official, with a DOI—appears to be entirely up to the author, does this do much in the way of winnowing? It’s hard for me to tell. I looked at some of the fields and topics. Within the dozen topics grouped as Engineering, most were empty, some had one or two articles, and I found no articles with any reviews; there appear to be a total of
four articles in all, from two authors. Humanities, with seven specific topics, shows a total of five articles, two of which have one review each—except that one review is a link to a current version of the article and the other corrects a one-word error in the article’s abstract. Looking further, I see that Medicine has twenty topics within it—and while there are more pieces (50 in all, I think), attempting to sort by “most reviews” seems to suggest that none have been reviewed, but that turns out to be wrong: the controls just don’t work.

Is this a workable model? Perhaps. Does it make sense to have one “journal” with even broader scope than PLOS One? I’m less certain. It is, at the very least, interesting. The site provides templates for the most likely ways to submit material, and the presentation is consistently attractive. But the site doesn’t seem to be gaining much traction, especially as a working demonstration of post-publication peer review; in effect, it doesn’t seem to be winnowing much. It may, of course, be too early to tell, although the site’s actually been operating since May 27, 2014. (Interesting: at the time of founding, the “nominal fee” for each publication was stated as $100; now, it’s $25.)

Problematic Peer Review!

A somewhat larger group of items that seemed to relate to problematic peer review—as opposed to defective peer review, the theme of the next group. The difference should be utterly clear to any literate reader. Or not.

Scientists ‘bad at judging peers’ published work,’ says new study

This press release from PLOS appeared October 8, 2013 at EurekAlert!

Are scientists any good at judging the importance of the scientific work of others? According to a study published 8 October in the open access journal PLOS Biology (with an accompanying editorial), scientists are unreliable judges of the importance of fellow researchers’ published papers.

The article’s lead author, Professor Adam Eyre-Walker of the University of Sussex, says: “Scientists are probably the best judges of science, but they are pretty bad at it.” Prof. Eyre-Walker and Dr Nina Stoletzki studied three methods of assessing published scientific papers, using two sets of peer-reviewed articles. The three assessment methods the researchers looked at were:

• Peer review: subjective post-publication peer review where other scientists give their opinion of a published work;

• Number of citations: the number of times a paper is referenced as a recognised source of information in another publication;

• Impact factor: a measure of a journal’s importance, determined by the average number of times papers in a journal are cited by other scientific papers.

The findings, say the authors, show that scientists are unreliable judges of the importance of a scientific publication: they rarely agree on the importance of a particular paper and are strongly influenced by where the paper is published, over-rating science published in high-profile scientific journals. Furthermore, the authors show that the number of times a paper is subsequently referred to by other scientists bears little relation to the underlying merit of the science. As Eyre-Walker puts it: “The three measures of scientific merit considered here are poor; in particular subjective assessments are an error-prone, biased and expensive method by which to assess merit. While the impact factor may be the most satisfactory of the methods considered, since it is a form of prepublication review, it is likely to be a poor measure of merit, since it depends on subjective assessment.”

Here's a link to the full article: Eyre-Walker A, Stoletzki N (2013) The Assessment of Science: The Relative Merits of Post-Publication Review, the Impact Factor, and the Number of Citations. PLoS Biol 11(10): e1001675. doi:10.1371/journal.pbio.1001675
What's being discussed here is not peer review as in "is this article good scholarship?" but importance estimation as in "is this article a Big Deal?" Whatever PLOS One's good and bad points, at least it's one journal that's split off the two functions.

Towards five stars of transparent pre-publication peer review
This one's from Wouter Gerritsma on December 24, 2013 at WoW! Wouter on the Web. He takes as his starting point a set of "Principles of transparency and best practice in scholarly publishing," specifically the discussion of peer review:
"All of a journal's content, apart from any editorial material that is clearly marked as such, shall be subjected to peer review. Peer review is defined as obtaining advice on individual manuscripts from reviewers expert in the field who are not part of the journal's editorial staff. This process, as well as any policies related to the journal's peer review procedures, shall be clearly described on the journal's Web site."

Wowter doesn’t think this statement is enough; he proposes a “five star system for transparency of the peer review process”:

1 *: Providing clear dates of submission, revision, acceptance and publication
2 **: Listing the reviewers involved once a year
3 ***: Providing a yearly overview of submissions and acceptance
4 ****: Naming the handling editors and reviewers per article
5 *****: Publishing the review reports online alongside the final article

The rest of the post expands on the five stars. It’s an interesting discussion, and it would certainly seem that applying all five—the last one probably being the biggest change for most journals—would improve the transparency and possibly the quality of peer review. (I see #3 being done on a real-time basis by quite a few journals; it seems to be part of Hindawi’s template, for example.)

Is it time for a journal Review Quality Index?
Ivan Oransky posted this on June 13, 2014 at Retraction Watch, pointing to a paper in Trends in Ecology & Evolution. (I don't link to paywalled articles, and of course can't read them either.) Oransky refers to a different proposed "transparency index" that's both broader and narrower than the one just mentioned, and agrees with the paywalled paper's assertion that "the process of manuscript reviewing needs to be evaluated and improved by the scientific publishing community."
I have to admit to some puzzlement on one thing mentioned, a "peer review evaluation" that appears to add another priced service that, well, says that something called peer review has actually taken place. As a naïve layman, I don't see how this improves the situation. The article's proposals seem more stringent and, in one way or another, consist of reviewing the reviewers or the reviews. Maybe you'll find them interesting. The fourth one seems to me to have some potential, but I'm not sure how much:
"Finally, journals might provide the details of articles submitted, but rejected, to an external web-based repository. This external repository could then track the fate of rejected manuscripts, and monitor whether they are subsequently published by another journal. Using the citation of these initially rejected versus accepted manuscripts, it would be possible to compare the reliability of decision-making process. Based on these data, a Review Quality Index might be calculated."

I'd comment first that the rejected manuscripts will almost certainly be published elsewhere if the authors are determined to see that happen—and that, without doing textual comparisons, it would be extremely difficult to determine whether the published paper was improved through multiple peer reviews. The comments are interesting.

Three myths about scientific peer review
An interesting post by Michael Nielsen—and, oddly enough, this one goes way back, to January 8, 2009 on Nielsen's blog. (I just hadn't picked up on it until July 29, 2014.) It's a long and interesting post, worth reading in the original (if you haven't already). I'll quote the myths themselves, but not the discussion:
1: Scientists have always used peer review
2: Peer review is reliable
3: Peer review is the way we determine what's right and wrong in science

There's some really good stuff in the discussion (did you know that, apparently, only one of Albert Einstein's 300-odd papers was peer reviewed—and that he objected to such review?). I'll quote the final paragraph:
In practice, of course, nearly all scientists understand that peer review is only part of a much more complex process by which we evaluate and refine scientific knowledge, gradually coming to (provisionally) accept some findings as well established, and discarding the rest. So, in that sense, this third myth isn't one that's widely believed within the scientific community. But in many scientists' shorthand accounts of how science progresses, peer review is given a falsely exaggerated role, and this is reflected in the understanding many people in the general public have of how science works. Many times I've had non-scientists mention to me that a paper has been "peer-reviewed!", as though that somehow establishes that it is correct, or high quality. I've encountered this, for example, in some very good journalists, and it's a concern, for peer review is only a small part of a much more complex and much more reliable system by which we determine what scientific discoveries are worth taking further, and what should be discarded.

A long stream of comments, worth reading.

What happens when a very good political science journal checks the statistical code of its submissions?
This relatively short piece by Chris Blattman on December 12, 2014 at his eponymous blog is mostly select quotations from another post—by Nick Eubank on December 9, 2014 at The Political Methodologist. The Quarterly Journal of Political Science does in-house review of statistical code related to articles, an "expensive and hard to do" task.

Of the 24 empirical papers subject to in-house replication review since September 2012, only 4 packages required no modifications.
Of the remaining 20 papers, 13 had code that would not execute without errors, 8 failed to include code for results that appeared in the paper, and 7 failed to include installation directions for software dependencies. 13 (54 percent) had results in the paper that differed from those generated by the author's own code. Some of these issues were relatively small — likely arising from rounding errors during transcription — but in other cases they involved incorrectly signed or mis-labeled regression coefficients, large errors in observation counts, and incorrect summary statistics.
You might think to yourself, "that would never happen in the very top journals", or "that's less likely among statisticians and economists." While I think expertise might reduce errors, I'm not so sure. More senior people with more publications tend to rely more on research assistants. And, speaking from personal experience, I've found major problems in important people's code before.
So roughly 83% of the papers checked (20 of 24) had problems in the statistical code. Would the results be better in other social sciences? I suspect my own opinion is colored by my belief that some (many?) social scientists use statistical methods a little too loosely. Would they be better in the hard sciences? For the sake of any replicability, one can only hope…
Note: This is not at all related to OA. The journal is a subscription journal.

The NIPS Experiment
As with the previous item, including this Eric Price piece (published December 15, 2014 at Moody Rd) may be a stretch for a roundup on ethics and access, but it's relevant to a discussion of peer-review problems. It's a fairly detailed report on an interesting experiment related more, I think, to the "is this important enough?" aspect of (some) peer review—although in this case it's for conference papers rather than journal articles, and there presumably is an actual limit on how many papers can be presented during a conference, unlike the nonexistent limits on how many articles an online journal can "fit."
The experiment: for the 2014 NIPS Conference (an important machine learning conference), the organizers split the program committee (which decides which proposed papers will be accepted) down the middle. Most papers were assigned to one of the two panels—but 166 papers were assigned to both. In both cases, the panels needed to choose about 22.5% of the submitted papers.

The results were revealed this week: of the 166 papers, the two committees disagreed on the fates of 25.9% of them: 43. [Update: the original post said 42 here, but I misremembered.] But this "25%" number is misleading, and most people I've talked to have misunderstood it: it actually means that the two committees disagreed more than they agreed on which papers to accept. Let me explain.
The two committees were each tasked with a 22.5% acceptance rate. This would mean choosing about 37 or 38 of the 166 papers to accept. Since they disagreed on 43 papers total, this means one committee accepted 21 papers that the other committee rejected and the other committee accepted 22 papers the first rejected, for 21 + 22 = 43 total papers with different outcomes. Since they accepted 37 or 38 papers, this means they disagreed on 21/37 or 22/38 ≈ 57% of the list of accepted papers. In particular, about 57% of the papers accepted by the first committee were rejected by the second one and vice versa. In other words, most papers at NIPS would be rejected if one reran the conference review process (with a 95% confidence interval of 40-75%).
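Price's percentages are easy to verify. Here's a minimal sketch of the arithmetic in Python, using only the figures quoted in the post (nothing recomputed from the underlying NIPS data):

```python
# Sanity check of Price's NIPS-experiment arithmetic (figures from the post).
papers = 166              # papers reviewed by both committees
rate = 0.225              # target acceptance rate per committee

accepted = papers * rate  # about 37.4 papers accepted per committee
disputed = 43             # total papers with different outcomes
one_sided = disputed / 2  # roughly 21-22 accepted by one committee, rejected by the other

print(f"disagreement share of all papers: {disputed / papers:.1%}")       # ~25.9%
print(f"share of an accepted list disputed: {one_sided / accepted:.1%}")  # ~57.6%
```

The second number is the one Price argues matters: over half of either committee's accepted list would have been rejected by the other committee.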

I can't fault Price's logic. The rest of the discussion has to do with possible models for paper acceptance that would be consistent with these findings: the "coin flip" model, the "messy middle" model and the "noisy scoring" model, which is to some extent a variation on the "messy middle." You'll have to read the article for more details. I'm guessing the results would be comparable for selective journals, OA or otherwise. I'll quote the conclusions—and you can consider whether you could substitute another field for "computer science" or "journal" for "conferences":
Computer science conference acceptances seem to be more random than we had previously realized. This suggests that we should rethink the importance we give to them in terms of the job search, tenure process, etc. I'll close with a few final thoughts:
	Consistency is not the only goal. Double-blind reviewing probably decreases consistency by decreasing the bias towards established researchers, but this is a good thing and the TCS conferences should adopt the system.
	Experiments are good! As scientists, we ought to do more experiments on our processes. The grad school admissions process seems like a good target for this, for example.
	I'd like to give a huge shout-out to the NIPS organizers, Corinna Cortes and Neil Lawrence, for running this experiment. It wasn't an easy task – not only did they review 10% more papers than necessary, they also had the overhead of finding and running two independent PCs. But the results are valuable for the whole computer science community.

Lots of comments, mostly requiring more understanding of the field than I have.

Is peer review just a crapshoot?
Here's an odd one (not directly related to OA), from Richard D. Primack, Ahimsa Campos-Arceiz and Lian Pin Koh, posted March 31, 2015 at Elsevier Connect. It reports on a study of 4,575 manuscripts submitted to Biological Conservation (a "hybrid" journal) and 2,093 of those papers that were sent out for review. The study also appeared in Biological Conservation, and I won't link to it because the free-access period has now closed and I wasn't willing to pay $35.95 to read it. (Primack is editor-in-chief of Biological Conservation.) The tease for this post:
How do reviewer recommendations influence editor decisions? And are Chinese authors treated fairly?

The answer to the first is "a lot," as you would hope: that is, if two reviewers recommend accepting with minor revisions, there's a 98% chance that the paper will be published, whereas if there are two "reject" recommendations or one reject and one "major revision," there's a less than 5% chance of publication. (As to the more extreme cases: if one reviewer recommended accepting as is and the other either wanted minor revisions or also recommended acceptance, the papers were all accepted—and if both reviewers recommended rejection, they were all rejected.)
But first the papers have to get sent to reviewers—and 71% of papers by Chinese authors were rejected without review, compared to only 50% of papers by authors from countries where the primary language is English (my take on "English-speaking countries," since there are certainly tens of millions of people in China who speak English at some level).
So far, so good—"editors pay attention to reviews." But does the paragraph above suggest a problem or bias? Not according to the authors, including the editor-in-chief and another editor:
Although identifying the causes of the higher rejection rate was beyond the scope of our paper, the reasons are probably due to a combination of the topics being less relevant to the journal, lower quality of research work, and less effective presentation of the material.

So: Chinese authors are treated fairly by the editors because the editors say they are treated fairly.

It's also true that Chinese papers didn't do as well as "papers from English-speaking countries" when they did go out for review, but the margin is much smaller: 42% of Chinese papers that were reviewed were accepted, compared to 51% of English-speaking-countries papers. Overall, papers from China had slightly less than half the chance (12%) of passing both hurdles of papers from English-speaking countries (25%).
How consistent are reviewers?
The significant agreement between pairs of reviewers who are reviewing the same paper (as reflected by significant intra-class correlations) for papers from both China and from English-speaking countries indicates that reviewers are able to determine the quality of the paper using objective and repeatable criteria. However, the fact that these correlations are much lower than 1.00 also indicates that most of the time the two reviewers of the paper are coming up with different opinions on the quality of the paper.

So: most of the time reviewers come to different conclusions—but reviewers use "objective and repeatable criteria." An interesting sidenote: authors from China sometimes request reviewers from China—but Chinese reviewers were harder on Chinese papers than were reviewers from other countries. Be careful what you wish for?
The conclusions are, as I would expect, comforting for the status quo:
Reviewers provide useful information, and editors make reasonable decisions
Our results demonstrate that the review process is not a crapshoot; reviewers are providing useful information and editors are using this information to make reasonable decisions. Authors from a non English-speaking country such as China have a tougher time getting their work published, but this is not due to a negative bias by reviewers from English-speaking countries or editors. The next step in evaluating the review process might be to examine the usefulness of the extensive comments of the reviewers in the decision-making progress. And also, more attention needs to be devoted to supporting scientists from China and other developing countries to help them reach their potential in carrying out and publishing their research.

I will repeat that the authors decided that the causes of a higher editorial rejection rate for Chinese articles were "beyond the scope of our paper," but note that they can confidently state that negative bias had nothing to do with it. In the comments Primack notes that he had done an earlier study that demonstrated that there was no bias against women authors or young authors.
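For what it's worth, the two-stage numbers quoted above are internally consistent: the overall odds are simply the product of the desk-pass rate and the review-stage acceptance rate. A quick sketch, using the rates as quoted in the post rather than the study's own data:

```python
# Rough check that the overall odds equal the product of the two stage rates
# (all rates as quoted in the post; a sketch, not the study's code).
desk_pass_china = 1 - 0.71    # 71% of Chinese-authored papers desk-rejected
desk_pass_english = 1 - 0.50  # 50% desk-rejected for English-speaking countries

review_accept_china = 0.42    # acceptance rate once sent out for review
review_accept_english = 0.51

print(f"China overall: {desk_pass_china * review_accept_china:.1%}")              # ~12.2%
print(f"English-speaking overall: {desk_pass_english * review_accept_english:.1%}")  # ~25.5%
```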


Weighing Up Anonymity and Openness in Publication Peer Review

Hilda Bastian posted this on May 13, 2015 at Absolutely Maybe. It's a fairly detailed look at studies of peer review. There are at least two dimensions to openness in pre-publication peer review:
	Blind, double-blind, or non-blind? That's the focus here: should reviewers and authors be anonymous to each other (double-blind), only in one direction (which can either be that reviewers aren't provided with authors' names or vice versa, and in either case is single-blind), or non-blind (with names known in both directions)?
	Open reviews or closed? That is: if an article's been accepted, should the reviewers' comments be publicly available or not? I don't believe that's being discussed here, but it's an interesting question: if your signed review comments are available to all readers, it's likely to have both good and possibly bad effects on your reviewing and your willingness to review.
This article is about the first aspect. Bastian found 17 relevant comparative studies, a dozen of which were controlled trials. Bastian calls single-blind reviewing the situation where authors don't know who reviewed their papers; the inverse—where reviewers are named but authors are anonymous—may not exist as a real-world model. The questions the article attempts to answer by looking at the studies:
1. Did attempts to mask authors' identities affect acceptance or rejection of manuscripts, or the scientific quality of peer review reports?
Summarizing, apparently not—partly (maybe) because the masking frequently failed.
2. Did revealing peer reviewers' identities affect acceptance or rejection, or the scientific quality and collegiality of peer review reports?
Reading the description here, I'm confused as to whether they're talking about making reviewers' names available to the authors, publishing the reviews along with the articles, or not. The latter seems to yield better-substantiated reviews, if I'm reading correctly, and either one may lead to more courteous and helpful review; neither seems to yield less substantively critical reviews. But, as you might expect:
There was one large effect: many peer reviewers declined the invitation to peer review when they knew there was a chance they would be named—especially when they knew their colors would be nailed to the public mast if the article was published.

This is hardly surprising if perhaps unfortunate.
3. Did masking identities of authors and/or peer reviews affect detectable bias against authors?

A long and difficult discussion. The closing paragraph:
There's not much help in this group of studies. We would need much stronger evidence to draw conclusions about the effect of author blinding on bias. Based on these studies, a policy of blinding authors could have a benefit, a detrimental effect, or no effect on gender bias.

Bastian has quite a bit to say about the situation, beginning with these paragraphs:
So with that knowledge from studies, how does revealing identities stack up against anonymity and attempting to blind authorship? I think institutionalizing anonymity in publication peer review is probably going out on a limb. It's only partially successful at hiding authors' identities, and mostly only when people in their field don't know what authors have been working on. If blinding authors was a powerful mechanism, that would be evident by now.
Author and peer reviewer anonymity haven't been shown to have an overall benefit, and they may cause harm. Part of the potential for harm is if journals act as though it's a sufficiently effective mechanism to prevent bias.

But the rest is really worth reading, since there are some obvious signs of various forms of bias, at least for some journals. The comment stream is also worth reading.

Our broken peer review system, in one saga
This one, by Philip N. Cohen on October 5, 2015 at Family Inequality, really is a saga. It's a detailed discussion of how a journal article took two years from first submission to final publication, with four journals and, in all, eight review cycles involved.
This story illustrates some endemic problems with our system of scholarly communication, both generally and in the discipline of sociology specifically. I discuss the problems after the story.

The article itself is about changing attitudes toward pornography—specifically that men's tendency to not want to criminalize pornography is increasing at a faster rate than women's. It also seems heavily anti-pornography, which may or may not matter.
The gist of our paper is this: Opposition to pornography has declined in the U.S. since 1975, but faster for men than for women. As a result, the gender gap in opposition—with women more likely to oppose pornography—has widened.


That’s the finding. Our interpretation—which is independent of the veracity of our finding—is that opposition has declined as porn became more ubiquitous, but that women have been slower to drop their opposition because at the same time mainstream porn has become more violent and degrading to women. We see all this reflecting two trends: pornographication (more things in popular culture becoming more pornographic) and post-feminism (less acceptance of speaking up against the sexist nature of popular media, including porn). We could be wrong in our interpretation, and there is no way to test it, but the empirical analysis is pretty straightforward and we should accept it as a description of the trend in attitudes toward pornography. And for doing that empirical work we beg permission to tell you our interpretation.

The research was entirely based on analyzing responses in the General Social Survey, which Cohen says asks "a large sample of U.S. adults" a series of questions. In practice, GSS figures seem to be based on around 1,400 adults willing to deal with a 90-minute interview—and the question in this case is interesting:
Which of these statements comes closest to your feelings about pornography laws:
1. There should be laws against the distribution of pornography whatever the age.
2. There should be laws against the distribution of pornography to persons under 18.
3. There should be no laws forbidding the distribution of pornography.

It's not asking about "opposition to pornography": it's asking about the legality of pornography—that is, for respondents aware of the Constitution and legal precedent, the willingness to amend the Constitution so as to abridge the First Amendment. There are almost certainly many people who are opposed to pornography (that is, who don't like it and would prefer to see less of it) but who would not favor a drastic legal change; I count myself among them. (The authors treated responses 1 and 2 as identical.) Never mind: that's a different level of argument.
I read the article itself (in its repository form). I'm not a sociologist nor a reader of sociological journals; that this article struck me, in its verbiage and its first half, as more anti-pornography tract than research article may be my failing. Maybe I've been "pornographicated" (to use a variant on the authors' "pornographication"). I do see that women's opposition (that is, women's willingness to amend the Constitution so as to suppress pornography) declined from 53% in the late 1970s to 43% in the early 2010s—a drop of 10 percentage points—while men's "opposition" dropped from 34% to 23% (an 11-point drop). (Flipping that, 77% of male respondents now agree with the Supreme Court's view of the First Amendment as compared to 66% in the late 1970s, while female agreement with SCOTUS has gone from 47% to 57%.) But of course that leads into a whole bunch of statistical commentary.
I seem to have been distracted from the purpose of the post itself, which is not to show that pornography should be outlawed but to show that the need to spend two years and four journals to get this article published means that the peer review system is "broken." Does it do that? You'll have to draw your own conclusions. I do want to pull out one specific item: the editor of Sex Roles explicitly asked the authors to cite more articles from that journal, or as the authors say "to pad the journal's citation count." That's been tagged as one sign of questionable OA journals—which then raises the question of the status of the four journals.
	Gender & Society (where it was rejected with four reviews, and it's interesting that the reviewer who said the data wasn't good enough to support the interpretation meets this comment from Cohen: "In a world with limited space for publishing research—which is not our world—this would be a good reason to reject the article." So with limitless space, articles should be published even though the data don't support the conclusions? Hmm…): a society journal published by SAGE; subscription access. It is not online-only (individual subscriptions appear to be print-only), so limited space is a legitimate issue. No indication of "hybrid" availability.
	Sex Roles (where it went through four review cycles, including the editorial request to pad citations for Sex Roles): a print and online journal from Springer; "hybrid" but fundamentally subscription—only 41 of 4,863 articles are OA (the APC is $3,000).
	Social Forces (where it was rejected with two reviews, one of which felt that the data couldn't answer the questions posed): a print and online journal (reasonably priced for both personal and institutional subscriptions, but personal subscriptions are print-only) published by Oxford University Press; "hybrid" ($2,800 fee), and I could find no way to determine how many OA articles there are or to search for OA articles. (An OUP person has said elsewhere that their "hybrid" journals have 4% to 6% OA take-up rates.)
	Social Currents (where it was eventually published): a very young society journal published by SAGE (it began in 2014). I can't determine from the home page whether there's a print version; the only subscription offered is for an institutional e-only version. If there is a "hybrid" option I can't find it.
To sum up: these are all traditional journals, although one of them might be online-only (and that one, oddly, doesn't seem to offer individual subscriptions). Two offer very expensive OA options, taken up rarely in one case and at an unknown rate in the other. The journal displaying one possible sign of questionable behavior is well-established and most definitely a traditional journal. Does this piece establish that peer review is broken, or does it reaffirm the skeptical old adage that "Peer review doesn't determine whether something gets published, only where"? You got me.

Some perspective on "predatory" open access journals
Why is this post, by John Dupuis on March 31, 2015 at Confessions of a Science Librarian, here rather than in PREDATORS! or one of the related sections? Because…well, here's the start of the piece:
Predatory open access journals seem to be a hot topic these days. In fact, there seems to be kind of a moral panic surrounding them. I would like to counter the admittedly shocking and scary stories around that moral panic by pointing out that perhaps we shouldn't be worrying so much about a fairly small number of admittedly bad actors and that we should be more concerned with the larger issues around the limitations of peer review and how scientific error and fraud leak through that system. I'm hoping my methodology here will be helpful. I hope to counter the predatory open access (OA) journal story with a different and hopefully just as compelling narrative.
First of all, after gathering together some of the stories about predatory OA journals, I will present some of what's been written recently about issues in scientific peer review, it's problems and potential solutions. Then I'll be presenting a more direct counter narrative to the predatory one. First of all, I'll present some information about the fantastic resource Retraction Watch. Then I'll present some concrete case studies on how traditional peer reviewed commercial publishing fails in all the same way that supposedly predatory publishing fails. Finally, using the incredible work of Walt Crawford and others, I'll gather some resources that will further debunk the whole "predatory" open access moral panic and further suggest that perhaps it isn't the bogus OA journals that are the main source of "predatory" publishing, but rather that the big commercial and society publishers perhaps deserve that label more.

Dupuis isn’t against peer review, but notes that it’s a human system and prey to the full range of human weaknesses. To that end, he offers several sets of citations—a few of them also cited and discussed in this roundup. There’s a long list of resources on peer review— much longer than what I’m considering here (with some overlap). Another useful list cites key resources by and about Retraction Watch. Another long list, on failures in mostly traditional journals through stupidity, error or fraud is prefaced with: Below are examples where big commercial or society traditional, subscription-based peer review have fallen short, either due to careless or insufficient review or fraud on the part of scientists. Of course, peer review will rarely catch genuine fraud as the books are cooked. But even fraud cases demonstrate the limits of peer review across all scholarly communication, not just in “predatory” open access journals. I would like to emphasize that this list is extremely selective. I’m mostly only highlighting particularly egregious examples that have made their way into the mass media or onto popular blogs. As above, for much much more, please visit Retraction Watch for more complete coverage. For example, The top 10 retractions of 2014.

I did say "long list," didn't I? It involves fraudulent research, falsified data, reviewing one's own article(!) and more, including instances at Lancet, several Elsevier journals, British Medical Journal, Springer journals, Science, SAGE journals, IEEE, Nature, Wiley journal(s)… There's still more, and in all the post is a great resource. The comments are worth reading. (Full disclosure: Dupuis is a friend and colleague—he's even one of the friends I've actually met in person.)

Elsevier retracting nine papers for fake peer review
I am not going to include a laundry list of specific peer review problems. As you'll find if you visit Retraction Watch, that list could go on for many tens of thousands of words. But I will note this item, by Ivan Oransky on October 13, 2015 at Retraction Watch. The specifics begin with this from Elsevier:
Nine papers are being retracted from five Elsevier journals due to manipulation of the peer-review process that led to their publication. The retractions follow a thorough investigation using industry best practices as outlined by the Committee on Publication Ethics. The integrity of the editorial process was found to have been undermined by faked review reports linked to fictitious email addresses, provided to the journal as a suggested reviewer during submission.


The piece notes that Elsevier has retracted more than 20 other articles for the same reason since 2012, and that the number of articles retracted for fake peer review across all publishers stands at about 260 (for those Retraction Watch is aware of). In one case, an article that falsely included one researcher's name was reviewed by somebody faking that researcher's name with a Hotmail account.
On one hand: 260 articles over two years represent a tiny portion of the 2.8 million to five million articles published during that period: less than 0.01%. On the other, this is just one example of the extent to which peer review problems can occur in any journal, no matter how big or traditional the publisher is.
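That back-of-the-envelope fraction checks out. A quick sketch, using the figures as cited (both article counts are rough estimates):

```python
# Share of known fake-peer-review retractions among all articles published
# over roughly two years (figures as cited in the piece).
retractions = 260
articles_low = 2_800_000   # low estimate of articles published
articles_high = 5_000_000  # high estimate

print(f"{retractions / articles_low:.4%}")   # ~0.0093%, under 0.01%
print(f"{retractions / articles_high:.4%}")  # ~0.0052%
```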

Do Academy Members Publish Better Papers?
We'll end this segment with Phil Davis' October 13, 2015 article at the scholarly kitchen, which adds a twist to peer review that had never occurred to me. To wit, fellows of the American Academy of Microbiology have a special route to publication in mBio (a high-impact, highly-respectable, high-APC—$2,250 to $3,000 for full articles—interdisciplinary OA journal published by the American Society for Microbiology): once a year, they can select their own reviewers for an article, then submit the article along with reviews directly to the editor, frequently leading to publication within days of submission. (Fellows are those recognized for "outstanding contributions to microbiology.") The editor isn't required to accept these packages, to be sure—but does in about 85% of cases.
Davis doesn't directly address issues with this preferential "choose your own reviewers!" approach; he looks at whether those papers are "better" than other papers in mBio—that is, whether they have higher citation counts. His conclusion: Yes, until you normalize for age of publication, at which point there's no significant difference.

Defective Peer Review!
Your assignment: Read through these items—then determine what distinguishes "defective peer review" from "problematic peer review." If your answer is "Crawford didn't want to have too many items in one section"…No comment.

Publisher discovers 50 manuscripts involving fake peer reviewers
Ivan Oransky posted this on November 25, 2014 at Retraction Watch—and in this case, the publisher, BioMed Central, trapped all but five of them before publication.
The narrative seems similar to that in the growing number of cases of peer review manipulation we've seen recently. What tipped off the editor was minor spelling mistakes in the reviewers' names, and odd non-institutional email addresses that were often changed once reviews had been submitted, in an apparent attempt to cover the fakers' tracks. Those "reviewers" had turned in reports across several journals, spanning several subjects. It would seem that a third party, perhaps marketing services helping authors have papers accepted, was involved. The publisher has let all of its external editors in chief know about the situation. To prevent it from happening again, authors will not be able to recommend reviewers for their papers.
The post adds more detail—including an update noting that, in many cases, editors didn't use the phony author-suggested reviewers. A fair number of comments, including one that argues that authors should be able to recommend reviewers (a practice that strikes me as inherently problematic). One pseudonymous commenter was of another opinion:
I never really grasped the concept of author-suggested reviewers. The public is also polarized about it, I know people who never suggest reviewers for their papers unless they are obliged during submission as they think it's unprofessional, and then there are others who always do that as they are on the opinion that managing editors appreciate such help.
Another commenter notes that Nature, Elsevier, Sage and others have also been duped by fake reviewers: it's not a problem specific to BMC or to OA. Then there's this from Adam Etkin, referring to comments that seem to be attacking BioMed Central:
Wait, the editorial office actually catches what appears to be an elaborate scam prior to accepting/publishing (some details still emerging) and somehow this shows they've done a BAD job or that somehow they're to blame? I don't follow this logic.
In other words, "Damned if you do, damned if you don't." Then there's this (excerpted from "Marco"'s comment):
Another journal in which I am involved demands you select suggested reviewers from its Editorial Board and its Advisory Board.
Demands? Wow.

Publishing: The peer-review scam
Cat Ferguson, Adam Marcus & Ivan Oransky, at Nature on November 26, 2014. It's related to the previous piece, but only in that it's based on Retraction Watch work and it's a similar problem. This one's a news story that covers a fair amount of ground, and I do recommend reading it directly (it's a Nature news feature, wholly accessible). Some highlights:
	One South Korean researcher's manuscripts in a Taylor & Francis journal ("hybrid" but with only two OA manuscripts since 2011) were getting favorable reviews with suggestions for minor improvements—but the editor was surprised that the reviews were coming back in as little as 24 hours. The reason? The journal invites authors to suggest potential reviewers, and the author provided names with email addresses that actually went to him or his colleagues. Result: 28 retracted papers (at the time, the journal was an Informa publication).
	There have been at least six uncovered instances of "peer-review rigging" in 2013 and 2014, requiring retraction of more than 110 papers. These cases involved vulnerabilities in the publishers' computerized systems and journals from Elsevier, Springer, Taylor & Francis, SAGE, Wiley and Informa.
Perhaps the most extreme single case: a Taiwan engineer who established a citation ring and peer-review ring involving 130 accounts on ScholarOne, yielding 60 articles that were retracted. (I'd quote that, but it's several paragraphs long. Fascinating stuff.) There's quite a bit more—it's a fairly long, wide-ranging article. Read the comments as well.

Is this peer reviewed? Predatory journals and the transparency of peer review.
This short piece by Bonnie Swoger appeared November 26, 2014 on Information Culture—and, given the context of the two previous pieces, I'd suggest a rewording of the title: "One more peer-review failure." But that wouldn't have the scare-word impact of "Predatory journals." The incident is the "Get me off your f**king mailing list" article and its "acceptance" by a journal with a $150 APC. Take away the "predatory journal" overlay, and Swoger is making a useful point that applies equally to nearly all peer-reviewed journals: most peer review is "shrouded in secrecy," making it impossible for a reader to know what level of review actually took place. Some journals, like PeerJ and F1000Research, are making reviews more visible.
This kind of transparency is a great way to combat the "predatory" journals. Users can see if a manuscript received a cursory look or a detailed review. They can see changes that were made between various drafts of the research. This process is central to modern science, but an outsider usually can't see it at work. Greater transparency will be one important aspect of improving the peer review system.

Good points—weakened, in my opinion, by the focus on so-called “predatory” journals.

Elsevier retracting 16 papers for faked peer review
Just a quick mention of this Cat Ferguson piece on December 19, 2014 at Retraction Watch, since it's more of the same: one author gaming three traditional journals to get 16 articles published thanks to suspicious peer reviews. A long string of comments, the first of which has a "solution" I've seen elsewhere: "An Editor should never consider a paper or a reviewer from a private email!"—because, as we all know, there can be no decent scholarship outside of academia.

Stretching the "peer reviewed" brand until it snaps
A commentary by David Rosenthal on January 6, 2015 at DSHR's Blog, wondering about the value added by peer review and pointing to a bunch of different issues, including some of the posts mentioned in this section. There's a lot here in a fairly dense piece ("dense" as in few wasted words, not "dense" as in unreadable or anything of the sort), so I'd suggest you read the piece directly and follow links as you wish. I'll quote most of the last two paragraphs, after he's cited some examples of peer review fraud and of high-ranking journals rejecting what turn out to be very important papers, sometimes without even going to peer review:
These recent examples, while egregious, are merely a continuation of a trend publishers themselves started many years ago of stretching the "peer reviewed" brand by proliferating journals. If your role is to act as a gatekeeper for the literature database, you better be good at being a gatekeeper. Opening the gate so wide that anything can get published somewhere is not being a good gatekeeper.
The fact that even major publishers like Nature Publishing are finally facing up to problems with their method of publishing that the scholars who research such methods have been pointing out for more than seven years might be seen as hopeful. But even if their elite journals could improve their ability to gatekeep, the fundamental problem remains. An environment where anything will get published, the only question is where (and the answer is often in lower-ranked journals from the same publishers), renders even good gatekeeping futile. What is needed is better mechanisms for sorting the sheep from the goats after the animals are published. Two key parts of such mechanisms will be annotations, and reputation systems.


Perhaps post-publication peer review is the only realistic way to winnow—and I wonder just how realistic it is.

Concern raised over payment for fast-track peer review
This one, by Daniel Cressey on March 27, 2015 at Nature's news section, has to do with a different peer review issue: paying for expedited peer review. And it's reporting on a sister journal: Scientific Reports—which is an OA journal but part of Nature Publishing Group. (It's now a megajournal, growing from 205 articles in 2011 and 807 in 2012 to 2,553 in 2013 and 4,021 in 2014.) The journal's APC is currently $1,495 (up from $1,350 when I checked it for The Gold OA Landscape 2011-2014)—but for an extra $750 as part of a pilot project, the journal promised to decide on the submission within three weeks.
One of the "thousands of volunteer editors" on the journal found the fast-track option offensive and resigned. The article discusses his reasons for resigning and NPG's reasons for the pilot, one that uses a third party to manage the peer review process. (It appears that the pilot has ceased; I find no mention of fast-track review on the home page.)
There are some OA journals not from traditional publishers that offer fast-track reviews for an extra fee. When those fast-track reviews guaranteed a turnaround of seven days or less, as several did, I graded those journals "CP" (highly questionable journal for peer-review reasons). I think there are ethical issues with any two-tier review system (even if one tier is for "honored members" of a society), for that matter.

Peer review and the "ole boys network"
Let's close this section with another example of a two-tier review-and-acceptance mechanism, similar to the closing piece in the previous section—except that this time it's a different journal, the Proceedings of the National Academy of Sciences or PNAS. The post is by Steve Caplan on October 23, 2011 at No Comment…and it's actually a three-tier system. One special track is only for NAS members. "Contributors" can submit up to four manuscripts per year using a different system, in which the author secures comments from at least two qualified reviewers; the submission then includes the article, the reviews, and author's responses. Then there's the third track: Direct Submission. In this case, the author arranges a "pre-arranged editor" with PNAS. To quote Caplan and, in turn, the PNAS guidelines:
There is another track–a relatively new track–that PNAS allows, that in my view is even worse than the NAS contributor mode: It's called "Direct Submission." What does this mean? It means that the authors have secured in advance a "pre-arranged editor"? Oh–that smacks of a Soviet era style "ole boys network." Find an editor in advance–a friend, colleague, mentor, brother, sister–someone who will agree in advance to get the paper published. Have a look at this, again from the PNAS submission site:
Prior to submission to PNAS, an author may ask an NAS member to oversee the review process of a Direct Submission. Prearranged editors should only be used when an article falls into an area without broad representation in the Academy, or for research that may be considered counter to a prevailing view or too far ahead of its time to receive a fair hearing, and in which the member is expert. If the NAS member agrees, the author should coordinate submission to ensure that the member is available, and should alert the member that he or she will be contacted by the PNAS Office within 48 hours of submission to confirm his or her willingness to serve as a prearranged editor and to comment on the importance of the work.
Now this actually manages to get around not one, but two levels of review. After all, for the ordinary-person's peer review track, the editorial board/editor generally rejects 75% of the incoming papers without their even reaching peer review. The "pre-arranged editor" trick circumnavigates the need to go through this initial triage selection process, and shunts the paper directly into press. Pretty amazing, eh? All you have to say is that there isn't enough general expertise on the board, or that the paper is–how do they put it? Here it is:
Counter to a prevailing view or too far ahead of its time to receive a fair hearing.
So if your paper is contrary to current views or "ahead of its time" (what the hell is that supposed to mean–and who decides this anyway?)–get a free pass. But the catch? You need to have a buddy on the editorial board. Otherwise, who will do this for you. You need to be part of the "ole boys network."

Well…maybe. For one thing, papers with prearranged editors are footnoted to that effect. Looking at the journal's website in late October 2015, that third method seems to have disappeared—and Contributed papers are identified as such. But then there's access. In fact, although PNAS is a subscription journal (but one that makes articles available after a six-month embargo), all authors pay a $1,500 publication fee—plus a $1,000 or $1,350 surcharge for immediate OA. Actually, the PNAS website seems to reflect a lot of the issues in these two sections. For example:


Authors must recommend three appropriate Editorial Board members, three NAS members who are expert in the paper’s scientific area, and five qualified reviewers.

Not "may" but must.

The Lists!
Oh, come on, you know what lists I'm talking about—and here are items, all but one from 2015, that refer to the lists, including three pieces based on a scholarly study based on the lists that I believe is seriously flawed—not so much the methodology as the results—and that illustrates the difficulty of achieving clarity in the face of hyperbole.

Open Access and the Problem of Predatory Publishers
The basis for this piece by Lisa A. Martin on August 2, 2015 at lisa a. martin//blog: since she had two papers published in a scholarly journal, she's been getting lots of emails asking her to submit articles to other journals—and recently an invitation to become an Associate Editor. OK, fine, but she pushes the Beall lists (although she does use scare quotes around "predatory").
Hey, I get a fair number of emailed invitations to submit to probably-scammy journals as well, invitations that Gmail regularly traps as spam. Since I'm not a specialist in any STEM field, it's easy to figure that they're being sent for no good reason. And when I do look at them, they're full of bizarre English. So far, I don't believe I've been solicited to be an Associate Editor, but maybe I haven't looked.
The rest of the piece is amusing, since the invitation came from one of those "English as a fourth language" operations, and the publisher's representative said some remarkably dumb things:
Most of the email made no sense whatsoever, and was written in such appallingly bad English that I almost couldn't be bothered to decipher it. There was talk of "stumbling rocks" and "funny lists", extensive inappropriate capitalisation and an inability to correctly use the indefinite article. In one place he referred to "pear review", and in another "peering reviewing". However, in the bits I could understand, he seemed to be giving Jeffrey Beall a right slagging off! Mills accused Beall of being a "mere librarian" of "fully questionable authenticity", with "limitations" and a "shallow and misguiding" knowledge of research. He accused him of playing "dirty tricks" because he allegedly works for a "mafia" of "Commercial Capitalists", paid commission by certain big-name publishers to champion their pay-for-access journals over those with an OA model. He even accused Mr Beall of setting up a fake Nigerian mirror of the GJI home page, though quite why he'd want to do that, I have no idea.
The accusations against Beall are nonsense, as far as I know. There are comments. In one of them, from Ian Street, there's this:
Predatory journals are terrible. But Beall is slightly controversial in that he really does seem to be extremely anti-OA (not just predatory "OA" journals).

To which Martin responded: Yes, I have read some opinions about Beall – he’s a controversial figure in the world of open access, it seems. Nevertheless, it’s pretty unprofessional of GJI to slag him off in an email, and whatever Beall’s motives are, he’s certainly got a good point about most if not all of the publishers on his list!

I am astonished that Martin has such in-depth personal knowledge of all the publishers and “publishers” on the lists that she can confidently label “most if not all” of them as predatory! Martin used to work for BioMed Central; I wonder if Beall’s recent attack on BMC would change her opinion?

The Robber Barons of Open Access Publishing
This post by Jan Alexander van Nahl, on June 16, 2015 at Mittelalter, is an unfortunate example of the results of the lists and their uncritical acceptance as Truth by far too many scholars. I was a little taken aback by this portion of the opening paragraph:
Indisputably, a thorough peer review is capable of improving any scholarly article, and even a rejection does not necessarily put an end to attempts at publishing a specific paper. However, at a time when scholars are advised to publish in so-called high-impact journals only, in order to collect research points to get more money and better positions, 'peer review' has grown to the ultimate advertising slogan for any journal in vague fear of losing relevance.

I suspect there are scholars who would dispute the assertion that peer review has improved every one of their articles, and I fail to see any connection between high impact and peer review. But that’s not why I’m mentioning this article (which I suspect was originally written in German). The author says medievalists “often face a very small number of options for publishing.” After a little more discussion comes this: “To put it in a nutshell: in the humanities, the quantity of academic output is more important than ever.” Then comes the payoff:


In recent times, born-digital open access journals have become an alternative to handle this quantity: they are often less cleaved to the traditional restrictions of printed media, and open minded towards new types of peer review such as open peer commentary. Given the undeniable enrichment of medieval studies through such journals, what is the purpose of my brief contribution? Most medievalists, I assume, have subscribed to several newsfeeds, providing them with updates on virtually any topic within the field, including calls not only for papers, but also for reviewing and editing start-up journals. Some of these offers are quite tempting — all journals have started from scratch. Yet, all is not gold that has key words such as 'humanities', 'digital' or 'medieval' in its title. Occupying a popular term of medievalism and social criticism, I want to draw medievalists' awareness to the growing number of twenty-first century robber barons on the publishing road.
Being a global phenomenon, the problem of so-called predatory open access journals is already well-aware at many universities and research facilities, but, as far as I can see, still somewhat neglected (at least) within German academia. What is it all about? As mentioned above, it is certainly a good thing to be able to choose among several journals for publishing, and a journal offering prompt peer review and quick publishing is all the more welcome. But what if the author is informed about a fee to pay only after publishing? All the more, as a brief investigation indicates that such a fee ex post can easily range from several hundred to several thousand dollars per paper. And what if an established scholar never comes to know that she or he promotes a journal by being listed as a member of the editorial board for example? A pioneer in uncovering such unethical (though not necessarily illegal) publishing behaviour is Jeffrey Beall, academic librarian and associate professor at the University of Colorado (Denver), running the website "Scholarly Open Access". Nevertheless, it is hard to define and assess the complex phenomenon we are talking about. Where is the line, one has to cross to be on "the dark side of publishing"?

There follow more quotations from Beall and what strikes me as a suggestion that most if not all OA journals are "wolves in shining armour."
The first point to be made here is about the suggestion that OA journals commonly bill authors for fees that are never mentioned on the site, with such hidden fees running to several thousand dollars per paper. Yes, it's absolutely scammy to bill an author when no OA fee is mentioned—and I've marked any journal that doesn't spell out its OA fees as to be avoided—but other than a few anecdotes, I'm not aware that this is a widespread problem. If it was happening hundreds or thousands of times a year, wouldn't we hear about it?
But there's a broader point: is medieval studies really a common target of "predatory" publishing? I did a quick check (certainly not scientific or exhaustive), using "medieval" as a string to check both the Beall set I looked at last year and the DOAJ set. I found no active journals with that string in their titles in the Beall set. Not one. (I did find one empty "journal" from a "publisher" with 188 "journals" in its portfolio, none of them with any published articles.)
So let's look at DOAJ. There, I found an even dozen journals with "medieval" as a string somewhere in the title. Guess what? Not one of these journals charged an article processing fee. Not one. Nor was there any indication of concealed fees or trickery (which would get them bounced from DOAJ). A broader search for "medieval" anywhere within the journal records, done at DOAJ itself on October 23, yields a surprising 39 journals. Not one of which charges APCs (and all have specified that they do not). So these robber barons make their fortunes by…not charging fees? Incidentally, those results shouldn't be too surprising: very few OA journals in history, language & literature, or anthropology charge APCs.
What we have here, as far as I'm concerned, is flat-out fear-mongering. With zero known examples of actual fee-charging OA journals related to medieval studies, the author still warns authors of "robber barons." Because Beall.

Predatory Publishing Isn't The Problem, It's a Symptom of Information Inequality
This piece, by Phill Jones on July 29, 2015 at Perspectives (a Digital Science blog), is particularly interesting, given that Phill Jones is also one of the chefs at the scholarly kitchen (and, like Rick Anderson, one of those I find well worth reading and conversing (or arguing) with). I'm going to jump to the end of the post first, for reasons that may be obvious:
If we're going to move forward and have a reasonable discussion about predation, I think we need to do two things.
1. Librarians in the West, stop linking to Beall's list as a way of dealing with predatory publishers. It's divisive and the information on there is actually fairly redundant. Your patrons already know how to pick journals to publish in, or their supervisors do.
2. Everybody in the industry needs to start thinking about predatory publishing as part of the larger global problem of information inequality and acknowledge who is actually being victimized.
By focusing our discussion on the predators themselves, we're missing the bigger picture. If we're going to fix this, we need to look at the conditions that have allowed these companies to thrive in particular markets and then look to try to solve or mitigate those larger issues.

Although I think even the term “predatory” misstates the situation, #1 here is a startlingly good statement. Would that people would listen. The rest of the post backs up these recommendations and notes that Beall’s list seems to have a, shall we say, regional bias (citing several sources). Jones offers this note about Beall’s methods (which would be more convincing were it not for Beall’s heated screeds against all open access):

In my opinion Beall’s error is more technical than moral, in that he confuses common characteristics of certain predatory publishers with proof of predation. To put it bluntly, if spelling and grammar errors are how you detect bad publishers, you’re going to identify more non-native speakers.

From a Western perspective, I think the most interesting part starts here:

I heard a fantastic talk by Donald Samulack of Editage which frankly addressed one aspect of predatory publishing that I think many are a little nervous to approach, but one that is vital to talk about. To sum up the issue, I want to start with the question that came to my mind when I first heard about predatory publishing: If predatory publishing is such a big problem, how come I don’t know a single person who’s fallen for it?

I’m not the only one. I’ve asked a lot of academics and almost all of them are baffled as to why publishers think that one particular brand of email spam is any more of a problem than any other. As Monica Garcia-Alloza, a mid-career academic from Cadiz, Spain stated in a talk that she gave at the Association of Subscription Agents conference back in February: “I get these emails every day but I don’t know a single academic who would fall for these obviously fake journals. I only publish in journals that I know about. Honestly, nobody would fall for this, it’s not a problem for me.”

As Jones said, looking at article lists in some of the sketchier journals does tell you who “falls for it”—although, frankly, this assumes that the authors are being deceived, an assumption I increasingly question. The victims, in Jones’s telling, are in “troubled or emerging markets where information is not as freely available as it is in more established ones.” I would add “and where authors will find that their national and regional studies, and maybe all of their research, are dismissed by established journals as being insufficiently important.” Overall, a good read and a useful perspective. The comments are also worth reading.

‘Free-range’ activism on predatory practices does our industry a disservice
Rob Virkar-Yates posted this on July 10, 2013 on the Semantico blog; I didn’t encounter it until mid-2015. It fits a bit too perfectly at this point in the roundup to leave out. Virkar-Yates doesn’t scare-quote “predatory publishing” and views it as a “serious issue that must be eradicated,” but then he gets to The Lists after some notes on previous posts related to language or location of possibly-sketchy publishers:

This brings me smartly to the focus of both my previous posts: Beall’s list. Beall is possibly best described as a ‘free range activist’. One need not dig too deeply into his blog to find some pretty unpalatable nuances with regard to nationality and, even, race. Furthermore, he uses tactics more often used by the populist press—name and shame first and worry about the consequences later. His criteria and processes are self-invented and self-policed. An ‘advisory panel’ is mentioned but its members’ names and affiliations appear not to be published. In the absence of any formalised, industry-sponsored guidance Beall’s list has gained significant traction in the community and would seem to have become the de facto source of information on the subject.

I don’t think this is right. I would even suggest that, instead of leaving the definition and policing of this important and emotionally-charged debate to a single individual with questionable cultural perspectives, we should come together as a global community.

Virkar-Yates suggests that STM or AAP—both, notably, bastions of subscription publishers—take on the issue, “together with significant input from those representing scholars in the global south as well as the north.” What he wanted to see agreed to:

• What exactly constitutes a ‘predatory practice’ and what are the criteria that we use to define such practices?

• What process should be brought to bear in the identification and ‘outing’ of predatory publishers?

• How can we ensure fairness and decouple this process from northern assumptions and misapprehensions?

• How do we police this process to ensure fairness and transparency and ensure that this process stays abreast of changing paradigms?

It’s still a blacklist-oriented approach, and in my opinion problematic for that very reason (and how does a publisher demonstrate that it’s no longer “predatory”?), but it would at least be an improvement. It went nowhere, as far as I can tell.

Jeffrey Beall and Blacklists
This commentary by Hooman Momen appeared on August 4, 2015 at SciELO in Perspective. It’s a thoughtful essay, worth reading in the original, but I’ll quote a few excerpts. First, the “two main problems” Momen sees with Beall’s lists:

1. The list is based on the opinions and judgements of a single person and, therefore, subject to the errors of judgement, prejudices and conflicts of interest inherent in such an approach;

2. The list only includes open access journals, giving the impression that only this model of publication is subject to predatory and questionable practices.

And this:

But let us return to Mr Beall’s list. An article he recently published makes it clear that his targeting of open access journals is, in fact, based on his keen dislike of the open access movement in general, which he believes is a conspiracy led by European socialists aimed at destroying for-profit publishing. Mr Beall is particularly scathing of gold open access, stating that “Scholars should have never allowed a system that requires monetary transactions between authors and publishers”, but appears to have no problem with subscription journals levying page charges, a clear case of double standards. Mr Beall also appears to have a deep mistrust of academic publishing in the developing world. He regularly puts new publishers from these countries on his list until they can “prove” their credentials, creating added difficulties for publishers in these countries. A case in point is MedKnow, a publisher of reputable journals in the Middle East and Asia, including the journal of a regional office of the World Health Organization. This publisher was added to his watch list, presumably because it was based in India. The publisher was then acquired by Wolters Kluwer and the journals suddenly became safe in Beall’s worldview as the publisher disappeared from the watch list.

The article closes with this paragraph—and I would only note that I believe the desired “whitelist” is getting there, namely DOAJ:

Blacklists, particularly those created without due process, are morally perilous and it is time that Beall’s list is replaced with a list of reputable journals. Predatory journals are only one problem in an increasingly ethically challenging publishing environment particularly for inexperienced researchers. The new list should be impersonal, widely available for consultation, backed by academic organizations and with transparent exclusion and inclusion criteria but this is the subject for another post…

‘Predatory’ Publishing Up
That’s the article title for this Carl Straumsheim piece on October 1, 2015 at Inside Higher Ed; the web title is a bit more alarming: “Study finds huge increase in articles published by ‘predatory’ journals.” In either case, I applaud Straumsheim (or the editors) for using the appropriate scare quotes. This is one of several articles publicizing “‘Predatory’ open access: a longitudinal study of article volumes and market characteristics” (by Cenyu Shen and Bo-Christer Björk in BMC Medicine). It’s a fair report, and includes a key caveat here:

Shen and Björk did their study based on a sample of 613 journals, and write that their results should be treated as a “rough estimate.”

I can’t fault the reporter for not doubting the article’s “estimate” that articles in “predatory” journals jumped from 53,000 in 2010 to more than 420,000 in 2014. That’s what the article says; I’m convinced that 420,000 is so “rough” as to be useless and misleading, and am disheartened by the fact that the authors will not even discuss the possibility that their sample (of 613 journals out of more than 10,000) might be less accurate than a full survey of the journals. But we’ll get to that a bit later.

The report naturally features a graph showing this incredible growth in “predatory” publishing—after all, it’s a shocker! It also quotes Heather Joseph’s comment that the increase is “stunning”—which, if it were accurate, would be true enough. The report also cites country-of-author findings from the Shen/Björk study, without noting that those findings are based on some 262 articles, or just over one-twentieth of one percent of the supposed total “predatory” publishing, and a number so small that any conclusions are (to this eye, at least) questionable. More from Joseph:

“The pressures that current academic evaluation practices place on researchers are a contributing factor to the rise of these predatory journals,” Joseph wrote. “The practice of judging authors on where an article is published rather than on the quality of the information in the article itself is clearly one that needs to be challenged. And while this study illustrates what happens when this practice is systematized on a national level, the reality is that it is a persistent problem in academic institutions around the world, and needs to be addressed.”


Here’s an interesting paragraph in the report:

India’s and China’s populations obviously skew the geographic breakdown. When it comes to the number of articles per capita, another country comes out on top: Nigeria. The researchers calculated each country’s ratio of articles published in predatory journals to those indexed in Thomson Reuters’s Web of Science. Nigeria had the highest ratio—1,580 percent—followed by India (277 percent), Iran (70 percent) and the U.S. (7 percent).

Noting that these figures are further extrapolations based on a tiny sampling of articles, one might reasonably ask whether they say as much about the biases of Western journals and Thomson Reuters as they do about “predatory” publishing. Of course the reporter went to get responses from Beall—who, not surprisingly, didn’t find it sufficiently anti-OA. At best, Beall’s critics get links and very brief quotes; none is named or interviewed. The comments are interesting.
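To make the arithmetic behind that ratio concrete, here’s a minimal sketch in Python; the function name and the per-country article counts are invented for illustration, and only the percentages come from the report.

# Sketch: the per-country ratio as described in the report: articles in
# "predatory" journals relative to Web of Science-indexed articles,
# expressed as a percentage. The counts below are hypothetical.
def predatory_to_wos_ratio(predatory_articles, wos_articles):
    return 100.0 * predatory_articles / wos_articles

print(predatory_to_wos_ratio(2770, 1000))   # 277.0, India's reported figure
print(predatory_to_wos_ratio(15800, 1000))  # 1580.0, Nigeria's reported figure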

Predatory journals published 400,000 papers in 2014: Report
This piece, by Alison McCook on September 30, 2015 at Retraction Watch, is, I’m afraid, worse than the previous report. McCook doesn’t scare-quote “predatory” (which the article itself did), and while Beall gets his chance to deride the authors for not being negative enough about OA, there is no indication of any sort that Beall’s lists could be anything but Gospel: No links, no critical comments, nada. I include this at all because of some of the comments, including my involvement. Lars Bjørnshauge of DOAJ questioned the findings, saying the numbers are “way beyond my expectation,” and quoted portions of my own commentary. Here, I’ll quote the whole post (which you can find here):

Careful reading and questionable extrapolation
On October 1, 2015 (yesterday, that is), I posted “The Gold OA Landscape 2011-2014: malware and some side notes,” including this paragraph:

Second, a sad note. An article–which I’d seen from two sources before publication–that starts by apparently assuming Beall’s lists are something other than junk, then bases an investigation on sampling from the lists, has appeared in a reputable OA journal and, of course, is being picked up all over the place…with Beall being quoted, naturally, thus making the situation worse.

I was asked for comments by another reporter (haven’t seen whether the piece has appeared and whether I’m quoted), and the core of my comments was that it’s hard to build good research based on junk, and I regard Beall’s lists as junk, especially given his repeated condemnation of all OA–and, curiously, his apparent continuing belief that author-side charges, which in the Bealliverse automatically corrupt scholarship, only happen in OA (page charges are apparently mythical creatures in the Bealliverse). So, Beall gains even more credibility; challenging him becomes even more hopeless.

When I’d looked at the article, twice, I’d had lots of questions about the usefulness of extrapolating article volumes and, indeed, active journal numbers from a rather small sampling of journals within an extremely heterogeneous space–but, glancing back at my own detailed analysis of journals in those lists (which, unlike the article, was a full survey, not a sampling), I was coming up with article volumes that, while lower, were somewhere within the same ballpark (although the number of active journals was less than half that estimated in the article). (The article is “‘Predatory’ open access: a longitudinal study of article volumes and market characteristics” by Cenyu Shen and Bo-Christer Björk; it’s just been published.)

Basically, the article extrapolated 8,000 active “predatory” journals publishing around 420,000 articles in 2014, based on a sampling of fewer than 700 journals. And, while I showed only 3,876 journals (I won’t call them “predatory” but they were in the junk lists) active at some point between 2011 and June 2014, I did come up with a total volume of 323,491 articles–so I was focusing my criticism of the article on the impossibility of basing good science on junk foundations.

Now, go back and note the italicized word two paragraphs above: “glancing.” Thanks to an email exchange with Lars Bjørnshauge at DOAJ, I went back and read my own article more carefully–that is, actually reading the text, not just glancing at the figures. Turns out 323,491 is the total volume of articles for 3.5 years (2011 through June 30, 2014). The annual total for 2013 was 115,698; the total for the first half of 2014 was 67,647, so it’s fair to extrapolate that the 2014 annual total would be under 150,000. That’s a huge difference: not only is the article’s active-journal total more than twice as high as my own (non-extrapolated, based on a full survey) number, the article total is nearly three times as high. That shouldn’t be surprising: the article is based on extrapolations from a small number of journals in an extremely heterogeneous universe, and all the statistical formulae in the world don’t make that level of extrapolation reliable.


Shen and Björk ignored my work, either because it’s not Properly Published or because they weren’t aware of it (although I’m pretty sure Björk knows of my work). They say “It would have taken a lot of effort to manually collect publication volumes” for all the journals on the list. That’s true: it was a lot of effort. Effort which I carried out. Effort which results in dramatically lower counts for the number of active journals and articles. (As to the article’s “geographical spread of articles,” that’s based on a sample of 205 articles out of what they seem to think are about 420,000. But I didn’t look at authors so I won’t comment on this aspect.)

I should note that “active” journals includes those that published at least one article any time during the period. Since I did my analysis in late 2014 and cut off article data at June 30, 2014, it’s not surprising that the “active this year” count is lower for 2014 (3,014 journals) than for 2013 (3,282)–and I’ll agree with the article that recent growth in these journals has been aggressive: the count of active journals was 2,084 for 2012 and 1,450 for 2011.

I could speculate as to whether what I regard as seriously faulty extrapolations based on a junk foundation will get more or less publicity, citations, and credibility than counts based on a full survey–but carried out by an independent researcher using wholly transparent methodology and not published in a peer-reviewed journal. I know how I’d bet. I’d like to hope I’m wrong. (If not being peer-reviewed is a fatal problem, then a big issue in the study goes away: the junk lists are, of course, not at all peer reviewed.)

“205” is wrong, but the number of articles is still tiny. I didn’t go back to 2010 in my full survey (not a statistical sample). I have gone back and looked at the numbers for the “Beall subset”—that is, journals in the Beall list that are not also in DOAJ. (I honestly don’t believe that these journals are all or mostly predatory; the evidence for all of MDPI’s journals being junk, for example, is simply lacking.) Here’s what I find: 54,540 articles in 2011; 85,601 in 2012; 95,698 in 2013; and 67,647 in the first half of 2014, which might extrapolate to somewhere between 135,294 and, say, 150,000 for the full year. (For some reason, I was overcounting 2013 in my blog post.) A growth from 54,540 articles to 150,000 is substantial—but 150,000 is a far cry from 400,000. How did the authors of the article respond to this information? As quoted in another comment after one of the coauthors was contacted:

Björk states: “Our research has been carefully done using standard scientific techniques and has been peer reviewed by three substance editors and a statistical editor. We have no wish to engage in a possibly heated discussion within the OA community, particularly around the controversial subject of Beall’s list. Others are free to comment on our article and publish alternative results, we have explained our methods and reasoning quite carefully in the article itself and leave it there.”

In other words: “Our 6% sample done using Proper Techniques trumps a 100% actual survey done by a layperson.” I find that sad…but I do note who’s getting all the mileage out of this.
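If you want to check the gap yourself, here’s a back-of-the-envelope sketch in Python using only the figures quoted above; the simple doubling of the half-year count is my assumption, not either study’s method.

# Sketch: compare a naive full-year extrapolation of the full-survey
# count with the article's sample-based estimate. Figures from the text.
first_half_2014 = 67647            # Beall-subset articles, Jan-June 2014
naive_full_year = 2 * first_half_2014   # 135,294
generous_full_year = 150000             # allowing for second-half growth
article_estimate = 420000               # Shen/Björk, from 613 sampled journals

print(naive_full_year)                                  # 135294
print(round(article_estimate / generous_full_year, 1))  # 2.8, i.e. nearly 3x higher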

Predatory publishers earned $75 million last year, study finds
Did I say that the Retraction Watch article was worse than the Inside Higher Ed report (which was actually pretty well done)? Here’s John “Sting” Bohannon on September 30, 2015 at Science Insider, and all stops are out right from the beginning:

Need to get your research published? Don’t want to be hassled by peer review or editorial quality control? You are in luck: There are thousands of scientific journals waiting to publish it right away, for a fee. A new study finds that the fake journal business is booming—and puts some hard numbers on this murky academic underworld.

Bohannon says the team “combed through hundreds of discredited academic journal websites”—thus, these are neither questionable nor “predatory,” they’re now “discredited.” Why? Because Beall says so, and Bohannon gives Beall room to go even more overboard:

Where did they get their list of predatory publishers? From Jeffrey Beall, the librarian at the University of Colorado, Denver, who both coined the phrase and for years has curated an online black list of publishing bad guys. Those on the list are “the worst of the worst,” Beall says. “In the vast majority of cases there is almost universal agreement that any particular publisher or journal deserves to be on the list. These are journals that sport fake impact factors, that promise a 1-week peer review, that publish tons of papers that contain plagiarism, and that annoy researchers worldwide with doltish spam.”

Given Beall’s clear opinion of OA in general, maybe he actually believes that these journals are “the worst of the worst,” the less-worst presumably being all the rest of OA. Note also Beall’s claim of “almost universal agreement,” based on…nothing whatsoever. As you’d expect, Bohannon finds someone else to Point With Alarm. As you’d expect, any doubt as to the perfection of Beall’s list or the accuracy of the results is nowhere to be found.


Backlash after Frontiers journals added to list of questionable publishers
This article by Mollie Bloudoff-Indelicato appeared on October 23, 2015 at Nature. It recounts some of the reactions to Beall adding Frontiers to his blacklist, a move that has drawn “wide disapproval from scholars.” One early response (from an associate editor at a Frontiers journal) noted: “Frontiers being added to Beall’s list reveals the big weakness of Beall’s list: It’s not based on solid data, but on Beall’s intuition.”

In response, Beall cites “dozens of e-mails” and two questionable articles in Frontiers journals. A statement from Frontiers noted its credentials and added: “Dubious actions as such by an individual with a long history of opposing Open Access publishing serve only to create confusion that slows down the development of Open Access publishing.”

Then there’s Neuroskeptic, who disagrees with the Frontiers decision but just loves blacklisting, and says “the grand majority of these publishers are seriously dodgy,” presumably based on their own independent knowledge. The article does note that The Holtzbrinck Group is both part-owner of Frontiers and part-owner of Springer Nature, Nature’s parent company.

Is Frontiers a potential predatory publisher?
In this title for Leonid Schneider’s October 28, 2015 post at For Better Science, the author does something unusual: using one of Beall’s three-p qualifiers for his lists (potential, possible, or probable). He offers some of his own “recent investigations” into the situation with Frontiers, which (he notes) was founded by two neuroscientists. (There’s an appropriate cartoon in the original, not reproduced here.) It’s an interesting and fairly long discussion, which harks back to the post about Frontiers much earlier in this roundup. Schneider notes a “key guideline” for Frontiers:

[E]ditors must always “allow the authors an opportunity for a rebuttal.” Associate editors are namely instructed to always “consider the rebuttal of the authors”, even “if the independent reviews are unfavourable.”

He says that, since the firings, Frontiers in Medicine has operated without an editor-in-chief and with few chief specialty editors. “There appear to be few people in a position to provide oversight, while the associate editors handle manuscripts which they often receive directly from authors.”

There’s a lot more about Frontiers editorial practice; Schneider says that, once an article has gone out for review, “Frontiers editors have hardly any option to reject it.” And there are a lot of editors: Frontiers’ own website makes a point of the publisher having “54 open access journals, 55,000 editors, 38,000 articles.” Schneider seems to feel that Frontiers practice is heavily biased toward publication:

Frontiers’ philosophy is to give all authors a chance to publish their work in one of their journals. In basic science, this is, to a degree, a laudable approach indeed. Many scientists convincingly argue that every single research study should be published and judged by the scrutiny of scientist colleagues in postpublication peer review. Yet this option is not available at Frontiers, and while the reviewers are named, their peer review reports are kept confidential. This concept to publish almost every manuscript, while keeping the peer review process rather opaque, has possibly contributed to the recent placing of the publisher Frontiers on the Beall’s List…. [A]t Frontiers, the rejection option is not always available. Generally, a peer reviewer can only withdraw from the peer review; recommending rejection is not an available option. If a reviewer does withdraw, the handling editor is automatically prompted to find a replacement reviewer. Theoretically, this can go on back and forth until two positive peer reviews are finally obtained. Occasionally, associate editors skip the search for willing reviewers altogether and perform the peer review themselves.

There’s quite a bit more. There are also comments. The first is from Gabriel Finkelstein, who coincidentally works at the same institution as Jeffrey Beall. In part:

My own experience at Frontiers in Neuroscience was quite positive. I was invited to contribute an article on paradigm shifts in neuroscience, and even though most historians of science now reject the concept I rose to the challenge based on the quality of the other contributors to the issue. Frontiers’ peer-review process was transparent. My reviewer made reasonable objections, most of which I accepted, but some (e.g. “rewrite your article to argue against paradigm shifts”) I could not for obvious reasons. My editor was also reasonable, and my manuscript went to press quite rapidly. I balked at paying to publish, but in my case the fee was waived.

I think that the dangers that you and Jeff Beall identify are real. On the other hand, my experience with Frontiers was no worse than those I’ve had with traditional journals in my field, some of which acted much worse in their choices of peer reviewer or in pressuring me to sign off on deficient and even plagiarized manuscripts. I also like reaching an audience of scientists, not historians, and am happy that 500 people have viewed my essay within a month of its publication. It usually takes ten years to garner this many readers in traditional journals.

There’s also a commentary by Frederick Fenter (Frontiers’ executive editor) in response to Schneider’s questions: worth reading. On Frontiers’ side, a comment from Virginia Barbour, chair of COPE (Committee on Publication Ethics), offers a statement on the publisher’s continuing membership in COPE, including these two paragraphs:

We note that there have been vigorous discussions about, and some editors are uncomfortable with, the editorial processes at Frontiers. However, the processes are declared clearly on the publisher’s site and we do not believe there is any attempt to deceive either editors or authors about these processes.

Publishing is evolving rapidly and new models are being tried out. At this point we have no concerns about Frontiers being a COPE member and are happy to work with them as they explore these new models.

I’m simply not qualified to judge Frontiers; this post and the comments at least add some detail to the situation, if perhaps not shedding clear light. As to the title’s question, however, my answer increasingly is that every publisher is a potential predatory publisher, if you define “predatory” loosely enough.

Beall!
Last and least, items about other aspects of Jeffrey Beall’s ongoing efforts. I will be as polite as possible.

Beall’s criticism of MDPI lacks evidence and is irresponsible
So says Peter Murray-Rust on February 18, 2014 at petermr’s blog. Among other things, PMR doesn’t think a blacklist should be maintained by one person lacking discipline knowledge. Later, he strengthens that: “It is wrong that a single person can destroy a publisher’s reputation.” (I’m leaving this item in, even though MDPI is no longer on the blacklist, because of the other issues it raises.) He has some experience with some MDPI journals and doesn’t find them questionable; in fact, he data mines from three MDPI journals. He says he’s read hundreds of MDPI articles:

If I were to review a paper in any of them I would assume it was a reasonably competent, relatively boring, moderately useful contribution to science. The backbone of knowledge. I would expect to find errors, as I would in any paper. I…

I love this, since it’s one of Beall’s criticisms that always struck me as ludicrous:

Beall’s criticism that these are “one-word” titles is ridiculous and incompetent. They are accurate titles.

All publishers have junk articles and fraudulent articles. We don’t know the scale… By default I would say that a paper in Molecules is no more or less likely to be questionable than one in a closed access journal from Elsevier or ACS.

Although, I suppose, given that junk-science articles have notoriously appeared in Science and Nature (don’t know about Cell)… PMR also thinks “we should highlight the equally awful (if not worse) practices of closed access publishers.” Well said.

Two dozen comments—including one from Beall, who begins:

This post is a good example of how Brits in particular and Western Europeans in general have been brainwashed into thinking that individuals should not make any assertions and that any statements, pronouncements, etc. must come from a committee, council, board, or the like. This suppression of individuality is emblematic of the intellectual decline of Western Europe. This suppression is laying the foundation for the erosion of individual rights in Europe and the forced imposition of groupthink throughout the continent.

Wowser. Beall later says that SPARC and other “OA organizations” are “afraid to admit that their OA fantasies are…just fantasies.” That SPARC is an “OA organization” will come as a surprise to the subscription journals that began as SPARC initiatives, no doubt.

Predatory publishers: who CAN you trust?
This comes from Helen Dobson on May 1, 2015 at Library Research Plus, a blog at the University of Manchester Library.

One of our responsibilities as OA advisors at Manchester is to keep track of so-called predatory publishers, and advise our researchers on publishers they should avoid. It can be hard to separate wheat from chaff, so we rely, where possible, on others helping to do this. Until recently, we recommended Jeffrey Beall’s list, a well-known directory maintained in the US. However, we have now removed the link, and will no longer advise our colleagues to use it. Here’s why …

That’s the lede; the rest describes what Manchester is doing and why “our views are not aligned with Beall’s.” Some key points:

 As with other UK universities, apparently, payments for APCs seem to go mostly to “hybrid” journals, and there’s “an urgent need for offsetting schemes to address the issue of double-dipping.”
 The lack of transparency about costs across the board is an issue (think non-disclosure agreements, for example). “What’s ethical about this lack of transparency? It’s practically the OED definition of predatory.”
 It’s worth engaging researchers about OA issues—not only the advantages but the barriers—and encouraging researchers to think, not just follow lists.

There’s more. It’s a good read, and includes a link to a fairly peculiar set of slides used in a Beall presentation to STM, which forms part of the background for the article. As for the title itself: see my little essay at the very end of this roundup. Maybe “trust but verify” is the best you can ever do.

What the Open-Access Movement Doesn’t Want You to Know
This article by Jeffrey Beall appears in the May-June 2015 issue of Academe, published by AAUP, the American Association of University Professors. I suggest you go read it yourself. And read the comments. Maybe this is enough to quote:

The open-access movement is a coalition that aims to bring down the traditional scholarly publishing industry and replace it with voluntarism and server space subsidized by academic libraries and other nonprofits. It is concerned more with the destruction of existing institutions than with the construction of new and better ones. The movement uses argumentum ad populum, stating only the advantages of providing free access to research and failing to point out the drawbacks (predatory publishers, fees charged to authors, and low-quality articles). It’s hard to argue against “free”—and free access is the chief selling point of open-access publishing—but open-access promoters don’t like to talk about who has to pay.

Few dare to disagree publicly with the open-access advocates’ proclamations; those who do are stigmatized. The open-access movement has become so institutionalized that it even has a police department (a volunteer one, of course), whose members verbally attack anyone who has the courage to question the movement’s ideals or its proponents’ motives or to point out its weaknesses and unintended consequences.

I might be inclined to call the first paragraph “meretricious nonsense” and the final sentence of the second paragraph “paranoid meretricious nonsense,” but that could be interpreted as an ad hominem attack. Let’s just say that, in my experience with OA over the last 26 years, neither is even remotely close to being true. The rest…well, it’s the rest. I would comment on the article appearing in what appears to be an OA journal (every article I clicked on came up immediately), but in fact Academe is a magazine, not a scholarly journal. It does have a one-word title, so I guess if it were a scholarly journal, it would be suspect… (By the way, “platinum open access” is a nonstandard term—even though Beall has elsewhere selectively quoted me, oddly enough, in an attempt to make it seem legitimate—for that portion of gold OA that is funded by means other than APCs.) While I’m not responding to this piece at any length, I will suggest you read:

A Response to Jeffrey Beall’s Critique of Open Access
Posted by Philip Young on May 19, 2015 at Open@VT. Young recently joined AAUP “and today was dismayed to see” the article above. I should note that Young praises Beall’s lists as “a tremendous service to the scholarly community,” so it’s not as though we’re dealing with a hater…but he follows that praise with “he has unfairly blamed these problems on open access as a whole.”

It became apparent just how off the rails Beall had gone when he published The Open-Access Movement is Not Really about Open Access in the journal TripleC (in the non-peer reviewed section; also see Michael Eisen’s response, Beall’s Litter). If you enjoy right-wing nuttiness (yes, George Soros is involved) you really should read it.

I’ve already discussed that article and some of the reactions in an essay that Beall regards as bullying. Young continues:

Beall’s critiques of open access are not always as factual as they could be, so as an open access advocate I am concerned when his polemics are presented to an academic audience that may not know all the facts.

What Young does here is not quite a fisking of Beall’s article, but it’s close enough that you’re better off reading it directly. A couple of gems:

But in criticizing predatory publishers (again unfairly extending his critique to all open access publishing) he gives subscription publishing a free pass. If you don’t think bad information has appeared in prestigious peer-reviewed subscription journals, try searching “autism and immunization” or “arsenic life.”

And this:


Prestige means that mostly we are paying for lots of articles to be rejected, which are then published elsewhere.

A good discussion, with some interesting comments, such as this from Mike Taylor:

I have just one quibble with your response. You quoted Beall’s assertion “Subscription journals have never discriminated on the basis of an author’s ability to pay an article-processing charge”, but omitted to note that this is simply false. Most subscription journals levy page-charges for articles over a given length, or for colour illustrations — something that most Gold-OA journals do NOT do. In fact, substantial and well-illustrated articles may well cost an author more to publish in a typical subscription journal than in most Gold-OA journals.

I was pleased to see that, when one commenter wondered how many OA articles have been published without fees, Lars Bjørnshauge responded with some of my own figures. For the record, and these are newer and more complete numbers than those in his comment: Among the DOAJ journals included in The Gold OA Landscape 2011-2014 (grades A & B), 7,048 did not charge APCs. Those journals published 177,855 articles in 2011, 198,552 articles in 2012, 206,561 articles in 2013 and 206,588 in 2014. That’s a total of 789,556 articles during the four-year period in serious gold OA journals without author-side fees. (Lars’ comment shows 470,000-odd articles from 2012 through 2014, but that was from the smaller subset of journals with English-language interfaces.)

Confusing Criticism With Bullying
T. Scott Plutchak posted this on July 8, 2015 at T. Scott. It takes off from “Beyond Beall’s List: Better understanding predatory publishers” in College & Research Libraries News, discussed earlier.

I was delighted when I read Berger & Cirasella’s Beyond Beall’s List: Better understanding predatory publishers in the March issue of College & Research Libraries News. Here was a well-balanced critique, lauding Beall for bringing attention to a serious problem, while also pointing to some of the justified criticisms he’s received for the lack of rigor in his methodology and his clear antipathy to open access in general. As the title of the piece indicates, the authors recommend going “beyond Beall” to consider additional factors when making a determination about the quality of a particular journal. I was happy to see it because I worry that some librarians and authors use Beall’s list uncritically as definitive. This article did a very good job of acknowledging the important contribution that Beall has made while putting it into the larger context of issues to be considered.

So far, so good, right? But there’s more—and I’m afraid I’m winding up quoting quite a bit of a post in a blog that doesn’t clearly carry a CC license, but I suspect T. Scott will forgive me. (He says it better than I could, and with none of my personal bias; T. Scott and I have been known to disagree on OA and other issues, always with mutual respect.)

Beall didn’t see it that way. In his petulant letter to the editor in the June issue, he complains about those who seek to discredit him. He makes a number of interesting assertions, the most peculiar of which might be his claim that pretending that predatory journals don’t exist is a “common strategy among academic librarians.” I do wish he’d provided some sourcing for this. I try to follow this topic fairly closely and I’ve never seen any academic librarian anywhere make such a claim.

But it was his reference to “feeling bullied” by Walt Crawford (who he doesn’t mention by name, but attempts to discredit by referring to him as “an author who writes and self-publishes his own non-peer-reviewed journal”) that particularly caught my eye and raises issues about how critical discourse is conducted in our highly emotional and discordant times. By using the highly charged word “bullied,” Beall seeks to pull attention away from the content of Crawford’s critiques to his own subjective sensitivities. If Crawford is being a bully, then right-thinking people need to come to Beall’s defense, not because he’s right on the merits of the critique, but because bullying is bad. By treating the critiques as if they were an ad hominem attack, Beall attempts to deflect attention from the substantive issues. Make no mistake — Crawford’s criticism was strong and in-depth and surely must have stung. But it was also rigorous and well-sourced. Harsh, perhaps, but scarcely “bullying.”

When I first encountered that “I won’t mention his name, but bullying” remark, I was amused, if only because Beall’s lists are, in fact, self-published and not peer-reviewed. But never mind. I honestly hadn’t thought about Beall using “bullying” as a way to deflect criticism, but that sounds right. The rest of the post deals with similar issues of language and tone, focusing on events in a certain Facebook group. Those or similar events were what caused me to drop out of that group, and I’d guess I could have named at least one of the participants. I thought the group was a worthwhile forum at first; maybe it is, but the sheer pigheadedness and abuse of a couple of folks finally drove me away.


T. Scott is concerned about the tenor of discourse of these times. He makes good points. Here’s what he says about Beall at the end:


Beall’s record of responding to his critics makes clear that reflection wouldn’t have tempered his response much (although one can imagine that the first draft of his letter displayed an even greater sense of unjustified persecution), but his easy resort to the claim of bullying is very much in keeping with the tenor of discourse of the times.


I did comment on the post, a comment that for some bizarre reason (possibly related to Google+) appeared with a link but not my name, and the link doesn’t yield my name. Here’s my comment:

Thank you for that. I was so dumbfounded by the charge of “bullying”—which, as you note, didn’t name me but could have referred to nobody else—that I didn’t even respond. What I’ve done is try to bring facts to bear on the situation, spending hundreds (literally) of hours in the process. If that’s bullying, then we’re in deep trouble.

Marcus Banks commented on T. Scott’s post in “How to Criticize—From Jeffrey Beall to Marriage Equality,” posted July 8, 2015 at Marcus’ World. He’s not primarily focused on the Beall incident; he’s more interested in how we go about criticizing and responding. But I can’t resist quoting this paragraph, even though I’m afraid I must disagree with it:

Back story: Jeffrey Beall is a librarian who has for many years kept tabs on “predatory” (dubious) open access publishers. Somewhere along the way this useful service morphed into an ugly and unmoored critique of open access publishing more generally. People have noticed, and fierce resistance has commenced. Once a useful and diligent beacon, Beall has since become a pariah. He does not like this, and now claims to have been “bullied” for offering heterodox views.

Beall has not become a pariah; if anything, he seems to be getting even more speaking invitations and his lists are turning up in even more places—certainly helped by the unfortunate Shen/Björk paper and the coverage it’s received. (I’m not going to cite more instances, but one I just saw in an Indian publication was truly saddening: it seemed to regard the lists as such Gospel that the fact that some journals are in both DOAJ and the lists was taken as an indication that DOAJ was faulty—apparently, Beall can never be wrong.) I am not suggesting that Beall should be a pariah; I simply believe that his lists are useless and harmful. And since I believe blacklists are the wrong approach in any case, that shouldn’t come as a surprise. The rest of Banks’ post is interesting and worth reading (although it’s not directly relevant to this roundup), including his discussion on why resisters to gay marriage are feeling bullied.

The SciELO Incident
The next few pieces relate to a July 30, 2015 post in Beall’s blog, one that compared SciELO to a favela (a Brazilian slum), contrasting it to the “nice neighborhood” of Elsevier and friends. I was astonished when I read the post, given that SciELO seems to be a highly successful platform for hundreds of gold OA journals (most but not all without APCs), a platform I found congenial as a researcher. (Technically, there are 15 platforms in 15 different countries, but the original in Brazil is by far the largest.) My thought at the time: “If you can’t attack an OA publisher or aggregator for being ‘predatory,’ come up with something else to discredit them.” It’s fair to say that some folks were less than delighted with Beall’s article. We’ll start with:

Motion to repudiate Mr. Jeffrey Beall’s classist attack on SciELO
This appeared August 2, 2015 at SciELO in Perspective. It’s reproduced here without modification.

By the Brazilian Forum of Public Health Journals Editors and the Associação Brasileira de Saúde Coletiva (Abrasco, Brazilian Public Health Association)

Jeffrey Beall, an American librarian who gained notoriety publishing a list of open access publishers and journals considered as “predatory” by him, posted in his blog an unbelievably mistaken and prejudiced article, beginning with its title, “Is SciELO a Publication Favela?”1

Based on an ethnocentric and purely commercial point of view, Mr. Beall supposes that, since the whole ensemble of its publications are not indexed by Thomson Reuter’s bibliographic database, and because of the discontinuation of a proposal by a Brazilian government agency to hire a commercial publisher to disseminate some of the nation’s periodicals, SciELO’s publications would be “hidden from the world” (sic). Seemingly in order to promote commercial publishers, Mr. Beall despises the asset that the SciELO collection represents, and makes factually incorrect assertions. Contrary to his statements, the whole collection is already indexed in the Scopus database. Also in opposition to another of his mistaken affirmations, SciELO has adopted for some time the Creative Commons license, which means that there is no risk of an article “losing its interest” due to author’s copyright issues.

One paragraph in particular demonstrates the prejudices, classism, imperialism and crass commercialism present in the tone of Mr. Beall’s diatribe: “Thus, commercial publisher platforms are nice neighborhoods for scholarly publications. On the other hand, some open-access platforms are more like publication favelas.”

As a counterpoint to this neocolonial point of view, a recent article by Vessuri and colleagues emphasizes the contribution of initiatives such as SciELO and Redalyc (also targeted by Mr. Beall) to the development of science in Latin America and the world: “In fact, Latin America is using the OA publishing model to a far greater extent than any other region in the world. Also, because the sense of public mission remains strong among Latin American universities, the effectiveness of open access for knowledge sharing was heard loud and clear. (…) These current initiatives demonstrate that the region contributes more and more to the global knowledge exchange while positioning research literature as a public good.”2

Contrary to the classist disgust that favelas elicit from Mr. Beall, we would like to reiterate that they are a kind of neighborhood where a sizable portion of the Brazilian population, which uses the nation’s healthcare system and is ultimately the source of funding for the Brazilian science itself, resides. Discrimination and prejudice against these Brazilian citizens is inadmissible. If the only alternatives for scientific publishing are either inhabiting the gated communities of the 1% of the world population which concentrates wealth at the cost of exploiting the other 99%, or being with the people in a favela, long live the favela.

Notes
1. BEALL, J. Is SciELO a Publication Favela? Scholarly Open Access. 2015. Available from: http://scholarlyoa.com/2015/07/30/is-scielo-a-publication-favela/
2. VESSURI, H.; GUÉDON, J.; and CETTO, A. Excellence or quality? Impact of the current competition regime on science and scientific publishing in Latin America and its implications for development. Current Sociology. 2014, vol. 62 nº 5, pp. 647-665. DOI: 10.1177/0011392113512839

I don’t believe I need to comment on this. A day earlier (when I’d seen a draft version of the statement above, before it was edited), there was this:

The fenced-off ‘nice’ publication neighbourhoods of Jeffrey Beall
Also on SciELO in Perspective, this time by Jan Velterop on August 1, 2015. Of course the SciELO blog has a CC-BY license, so I can quote most or all of Velterop’s comments legally. Which I will—most, but not quite all (and omitting three links).

We know Beall mainly from his list of ‘predatory’ journals and publishers. That list shows not only a strong bias against open access, but also against any journal or publisher that is owned by or run by non-Anglo-Americans or other non-Westerners. Though his list contains publishers and journals that have ‘American’ or ‘European’ in their titles, Beall nonetheless seems to associate quality primarily with American and European owned publishers. Cameron Neylon, a much more diplomatic person than I am, expressed it like this on Twitter: “There is [a] definite undercurrent of cultural bias/imperialism, sometimes shading to racism, around ideas of quality in schol[arly] comm[unication]s.”

And it is not just about quality: by calling the publishers on his list ‘predatory’ he accuses them of unethical behaviour. I am sure there are unethical publishers in his list, but Beall’s definition of ‘predatory’ is peculiar, to say the least. The publishers and journals on his list are exclusively open access ones, as if unethical behaviour, taking payment—or requiring copyright transfer—from authors without offering much of value in return, is the province of open access only. Yet the commercial subscription publishers and their journals that do that are not on Beall’s list. Researchers who need to publish (“publish-or-perish”, remember) are in many ways a vulnerable lot, and getting them to submit their articles by luring them into the trap called ‘prestige journals’ for journals that are anything but, and simply published by a widely known publisher – stripping them of their copyright in the process – is as much ‘predatory’ behaviour as open access journals making them pay Article Processing Charges.

The ‘nice neighbourhoods’ and ‘favelas’ in Beall’s world do differ in one important respect: exaggerated, and even false, claims of ‘prestige’ are non-existent on the SciELO platform, or at worst very rare; they are rife, on the other hand, in the ‘nice neighbourhoods’ of commercial platforms.

One of the main criticisms in Beall’s post is the idea that the SciELO platform is doing a poor job of distributing the journals’ content or increasing their visibility. He is plain wrong on both counts, but he probably means that they don’t spend much on actively promoting the journals and their content. If that is what he means, he may have a point. However, neither do all but very few of the journals from his commercial ‘nice neighbourhood’. He may also well be right that “many North American scholars have never even heard of these meta-publishers or the journals they aggregate.” Again, that, too, is true for most commercially published journals. I am convinced that most professional and diligent academic librarians in North America—and elsewhere for that matter—are well aware of SciELO. Some—though very few, one hopes—may choose not to share that awareness with their patrons. That’s not SciELO’s fault, obviously.

Academics worth their salt will look critically at Beall’s utterances, as they do with any information they come across, and then ignore his opinions. Those who do take him seriously and ignore SciELO are free to do so. It’s their right. And their loss.

Once again, I don’t have much to say. I’m extremely aware of SciELO thanks to the research I’ve been doing; I’d hope that most librarians are as well.

Defending Regional Excellence in Research or Why Beall is Wrong About SciELO
Phill Jones is not known as an extremist in favor of OA at all costs—and the scholarly kitchen, where this appeared on August 10, 2015, is certainly not known as a hotbed of OA advocacy. Jones recounts the controversy briefly and notes:

Some responses relied on arguments of unfairness and lack of transparency in Beall’s eponymous list, which isn’t really relevant here. It’s important to remember that Beall did not call SciELO predatory. I’m sure that he would concede that SciELO are a legitimate and reputable publisher. Instead, Beall contended that SciELO was a bad place for scholars to put their work. Here’s part of what he wrote:

Many North American scholars have never even heard of these meta-publishers or the journals they aggregate. Their content is largely hidden, the neighborhood remote and unfamiliar.

In other words, since SciELO isn’t as well known in the US and Canada as are other publishers, nobody’s going to read it and it’s a bad place to publish. That’s a pretty tough point of view to defend. To support it, Beall claims that SciELO isn’t indexed in Web of Science or Scopus. He conceded that you could find the work in Google Scholar, but contends that the search engine is “poisoned by fringe science” and so scholars won’t find SciELO’s content that way. The message here is that publishing with nationally run or regional publishers is not in the best interests of local researchers. There’s a few levels of wrong going on, but even if you agree with the value judgements being made, Beall’s argument would be flawed simply because the supporting evidence is incorrect.

I’m less satisfied that Beall would “concede that SciELO are a legitimate and reputable publisher,” but Jones is right that, this time, it’s not about the lists. Jones discusses why “the supporting evidence is incorrect” and also why local and regional publishers are important for the health of local and regional research. It’s a good and informative discussion (I certainly learned things from it), which you should read in the original, and ends with this:

As the Leiden Manifesto attests, in the field of informetrics, there is a consensus that local excellence should be preserved and encouraged but so far, many publishers and librarians haven’t entered that discussion. It’s going to be a difficult conversation to navigate as there are a lot of preconceptions and prejudices to be tackled. There is confusion about how to handle new entrants into the market (for the record, SciELO was founded in 1997) and what constitutes legitimate contribution as opposed to predation. The discussion surrounding emerging markets needs to mature significantly and a more careful and nuanced approach is needed. There is a real danger that the current tone in the discussion of predatory publishing could lead to a guilt by association of all publishers based in the non-English-speaking world and that would not only be entirely unfair, but damaging to the public good.

A long comment stream, worth reading.

Open Access in Latin America: a Paragon for the Rest of the World
This short piece, published August 17, 2015 at The Winnower, has a long list of authors: Juan Pablo Alperin, Dominique Babini, Leslie Chan, Eve Gray, Jean-Claude Guédon, Heather Joseph, Eloy Rodrigues, Kathleen Shearer and Hebe Vessuri. It’s short enough to quote in full (and carries a CC BY license). There have been no reviews, an unfortunately common situation for The Winnower as a platform for post-publication peer review.

Latin America is one of the world’s most progressive regions in terms of open access and adoption of sustainable, cooperative models for disseminating research; models that ensure that researchers and citizens have access to the results of research conducted in their region. SciELO is a remarkable decentralized publishing platform harboring over 1,200 peer-reviewed journals from fifteen countries located in four continents – South America, Central-North America, Europe and Africa. Redalyc, based in Mexico, is another extraordinary system hosting almost 1,000 journals from fourteen Latin American countries plus Spain and Portugal.

Governments around the world spend billions of dollars on infrastructure to support research excellence; platforms such as SciELO and Redalyc are extensions of these much larger investments in research. They reflect an enlightened understanding in Latin America that the wide dissemination of and access to research results is as important as the research itself. The rest of the world would do well to take note.

In a recent blog post, these two initiatives were discredited by Jeffrey Beall. In the post, Beall compared the two publishing platforms to favelas, resulting in a mean-spirited insult to both favela dwellers on the one hand, and SciELO and Redalyc on the other. Rather than maligning these initiatives, they should be held up as examples of best practice for the rest of the world.


Furthermore, just because some in North America do not know about SciELO and Redalyc does not render them irrelevant. This is an extremely elitist and narrow view of the world. Although these platforms may not be well known in some places, SciELO and Redalyc do raise the visibility and accessibility of the journals they host, particularly with their local communities. If these journals were published by the big commercial publishers, the vast majority of researchers in Latin America would simply not have access to the articles in those journals. What value is visibility, if people cannot access the articles?

One of the United Nations Sustainable Development Goals, which were finalized on August 1, 2015, is to “Build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation”. Both Scielo and Redalyc are excellent exemplars of this type of infrastructure. These types of networked meta-publishers allow for central governance of policies, procedures and controls, but are intentionally decentralized to support the development of local capacity and infrastructure ensuring greater sustainability and alignment with local policies and priorities. What Beall advocates for, namely to let powerful foreign players come in and take over local capacity building, is exactly the opposite of what sustainable development is about.

For these reasons, we believe that SciELO and Redalyc are very nice neighbourhoods indeed!

Conclusion

Are there publishers with sketchy ethics? Certainly. (Are all those publishers OA? I doubt it.) Is it reasonable to condemn an entire publisher because (a) one or two articles in one of its journals seem flawed or (b) one or more of its journals has ethical issues? Well…if you take that perspective, you can get to a clearer position, such as the one I took in an October 20, 2015 post at Walt at Random—a post intended to be mildly humorous and my own contribution to Open Access Week, but a post that may not be far from the truth. That post closes out this megaroundup. Since it’s my own text, I’m not giving it as one long set of quoted paragraphs.

You’re a PPPPredator! You’re a PPPPredator! You’re ALL PPPPredators!

I think I finally get it: what Jeffrey Beall is driving at, given his apparent standard that One Bad Article Condemns An Entire Publisher and his apparent plan to discredit each significant gold OA publisher for some reason, one at a time…

Namely, he’s too narrow but he’s right—and I’m using “PPPPredator” for “potential, possible or probable predatory publisher.” I apologize for doubting him; I simply failed to realize the Oprahism in what he’s saying, once you remove the OA-only blinders: to wit, every publisher is a PPPPredator. No? Consider:

• BMC is owned by Springer Nature, so now that Beall’s pointing out one possibly-defective article in one journal from BMC–but phrasing it as an attempt to discredit BMC in general, with the tagline “This is scholarly open-access publishing”–it only makes sense to conclude that Springer Nature is a PPPPredator.

• Frontiers just had the honor of being added to Beall’s list because…well, because Beall Gets Complaints. (Frontiers is partly owned by the part-owner of Springer Nature, but we’ve already covered them.)

• Beall’s made it clear that APC-charging journals inherently represent a conflict of interest (but apparently subscription journals with page charges don’t). Since pretty much every major subscription journal publisher now has at least “hybrid” journals (it appears that a substantial majority of subscription journals from larger publishers now have “hybrid” options, at least based on Outsell’s reports), and most of them have APC-charging gold OA journals, that makes Elsevier, Wiley, Taylor & Francis, SAGE, Oxford University Press, Cambridge University Press, the American Chemical Society, BMJ, RSC, IOP, American Institute of Physics…all PPPPredators. (With a bit of research, I could extend that list quite a bit…)

• AAAS (publishers of Science)? Yep: Science Advances charges APCs (even if the other AAAS journals aren’t “hybrid,” which I don’t know), so AAAS is a PPPPredator.

• Hmm. That does leave gold OA publishers funded by means other than APCs, but since Beall’s already attempted to discredit SciELO and Redalyc for being “favelas,” I’m sure he has similar approaches at the ready for other serious non-APC gold publishers.

So there it is. You’re a PPPPredator! You’re a PPPPredator! You’re all PPPPredators!

(Now that I think of it, I don’t believe ALA has any APC-charging gold OA journals or hybrid journals, although it does have both non-APC gold OA and subscription journals…but ALA’s published my writing on open access (or my “bilge” as Beall termed it in one of his never-ad-hominem remarks, which he has since deleted from the comment stream in which it appeared), so they must be a PPPPredator.)

Well, that’s a relief. Actually, it’s also true: any publisher is potentially a predatory publisher, especially when one man gets to determine what’s predatory. Pretty much every publisher will occasionally publish a “bad” paper, possibly one that some others think is “obviously” bad, possibly even one that’s plagiarized. Pretty much every publisher will have at least one journal where at some point the editorial board or peer review may involve issues (excessive publication, editorial overrides, etc.).

I need to modify some previous conclusions. As far as I can tell, somewhere between 1.4 and 2.5 million papers were published last year by PPPPredatory publishers in the most general sense. If you want to avoid all PPPPredatory publishers…you’ll just have to self-publish or go directly to arXiv or some other archive.

Or you could step back, take a deep breath, and look at journals using a little judgment and, for open access, the Directory of Open Access Journals, a whitelist that’s getting better and better. And maybe a little common sense. If you believe your already-paid-for scholarly research deserves the widest possible audience, there are thousands of serious gold OA journals available that don’t even charge author-side fees. (At least 6,383 of them published articles in 2014.)

Oh, and since this is my own odd little contribution to Open Access Week, let me add: If you want to know more about the realities of serious gold OA publishing from 2011 through 2014, based on a 100% “sample” of what’s out there, I’ll recommend my book The Gold OA Landscape 2011-2014, available in paperback form or as a site-licensed non-DRM PDF ebook. Every library school should have a copy; so should every serious OA publisher, at the very least. So, IMNSHO, should every ARL library.

A couple of non-footnotes:

• What? You believe one fundamentally flawed journal is enough to discredit a publisher, even if one article isn’t? You might check into the publisher that continues to publish a “scientific” journal that presumes that water somehow has memory… Just a hint: the name begins with Els…

• If you think I’m saying “All publishers are alike” or “There are no fundamentally defective journals” or “There are no publishers more interested in scamming money than in actual scholarship”–I’m not.

• If you think I’m saying “Blacklists are fundamentally flawed, and any transparent blacklist would include every major publisher”–well, yes, I am.

• And, of course, “pretty much every” does not at all mean “every,” just “many or most large and well-known.” Maybe it’s in the same truthspace as “all gold OA publishing involves APCs.” Maybe not.

Full ethical admission: I cleaned up one small error in the post as quoted here; it has also been corrected in the post itself.

Pay What You Wish

Cites & Insights carries no advertising and has no sponsorship. It does have costs, both direct and indirect. If you find it valuable or interesting, you are invited to contribute toward its ongoing operation. The Paypal donation button (for which you can use Paypal or a credit card) is on the Cites & Insights home page. Thanks.

If you’d like to see my research on the state of gold open access continue, this would be a very good time either to contribute to C&I, help find a source of funding, or buy a copy of The Gold OA Landscape 2011-2014, available in paperback form or as a site-licensed non-DRM PDF ebook.

Masthead

Cites & Insights: Crawford at Large, Volume 15, Number 11, Whole # 190, ISSN 1534-0937, a periodical of libraries, policy, technology and media, is written and produced, usually monthly, by Walt Crawford. Comments should be sent to [email protected].

Cites & Insights: Crawford at Large is copyright ©2015 by Walt Crawford: Some rights reserved. All original material in this work is licensed under the Creative Commons Attribution-NonCommercial License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc/1.0 or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.

URL: citesandinsights.info/civ15i11.pdf
