Why do open access consortia affiliate themselves with questionable publishers
> Why do these consortia affiliate themselves with questionable publishers?
Hindawi is a founding member of the OASPA, and MDPI, whatever opinion we might have about the quality of its journals, is a major player in the OA business. The OASPA apparently conducted an internal investigation of MDPI and seems to be satisfied with the results. Perhaps the better question is: is the OASPA itself questionable?
> How can the academic community pressure open access consortia to consider their members carefully?
I would recommend not treating open access consortia as relevant at all. Beyond that: do not submit papers to, serve on the editorial boards of, or accept reviewing tasks for sketchy journals.
On a more general level, one thing we could do is reduce the demand for low-quality, pay-to-publish 'OA' journals by challenging hiring policies at our local institutions that reward sheer publication volume.
The academic publishing world has a number of problems with inadequate or completely absent peer review. These problems are present at both open access and subscription journals. The recent statement from the STM Association also reflects the position that these problems are not confined to open access journals.
The members of OASPA include a heck of a lot of publishers, from very small to very large, including SpringerNature and Wiley. Elsevier would probably like to be a member, but it doesn't yet meet OASPA's stringent membership criteria.
Amongst publishers, OASPA membership is seen as a mark of quality. The Think.Check.Submit cross-industry initiative encourages researchers to check, if the journal is open access, whether it is listed in DOAJ and whether the publisher is an OASPA member -- both marks of trust and quality. Both DOAJ and OASPA are selective organisations: they don't list or admit just any and every OA journal or publisher.
As @Aubrey's answer notes, DOAJ has done good work to weed out questionable journals from its index.
The same is also true of OASPA. After the famous Bohannon sting published in Science, OASPA responded by investigating all three of the then (2013) OASPA publisher members that had accepted Bohannon's sting article: Hikari, Dove Medical Press, and SAGE. Incidentally, the two Hindawi journals (Chemotherapy Research and Practice and ISRN Oncology), the one MDPI journal (Cancers) and the one Frontiers journal (Frontiers in Pharmacology: Pharmacology of Anti-Cancer Drugs) tested in Bohannon's sting ALL rejected the sting article, 'passing' the test.
After these investigations, OASPA decided to terminate the memberships of Hikari and Dove Medical Press. Neither publisher has since returned to OASPA membership.
I do not think that DOAJ or OASPA affiliate themselves with questionable publishers; whenever such publishers have been pointed out to them, both have conducted appropriate, detailed investigations to weed out questionable journals and publishers.
I guess it all depends on what one considers a 'questionable publisher or journal'. I certainly do not consider MDPI, Hindawi or Frontiers to be 'predatory publishers', but I must admit I would not and have not chosen to publish with Frontiers or Elsevier because of distasteful business practices.
Organisations like DOAJ and OASPA cannot simply bar new publishers from applying to join - that is not in their spirit (openness!). But they certainly do heavily vet new membership and listing applications. I don't know what else to say; I found the question to be a little leading, tbh...
If I understand your question correctly, you're asking about why OASPA and DOAJ associate themselves with MDPI, Frontiers and Hindawi. Only OASPA and DOAJ will know for sure, but I'll venture this reason: MDPI, Frontiers and Hindawi aren't necessarily questionable.
First, something to remember about Beall's list: it started as the work of one person, which means it is easily biased. OA publishers span a spectrum from the clearly disreputable at one end to a very gray area at the other. Beall undoubtedly had good intentions, but if the *Who's Afraid of Peer Review?* sting meant anything, Beall was only about 82% accurate (in that sting, roughly 82% of the Beall's-list publishers that completed the review process accepted the bogus paper). In the sciences, a theory that predicts the right result 82% of the time is good but not great; in particle physics we even demand a 5 sigma result (a p-value of roughly 1 in 3.5 million) to claim a detection. I'm not saying Beall was wrong about MDPI, Frontiers and Hindawi, but I will say that "because Beall said so" is not a sufficiently good reason to conclude ____ is predatory.
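(As an aside, the "1 in 3.5 million" figure is simply the one-sided tail probability of a standard normal distribution beyond 5 standard deviations. Here is a minimal sketch to reproduce it, assuming SciPy is available; it isn't anyone's official calculation, just the standard Gaussian tail:)

```python
# One-sided tail probability of a standard normal beyond 5 sigma,
# i.e. the p-value threshold used for a "discovery" in particle physics.
from scipy.stats import norm

p_value = norm.sf(5)       # survival function: P(Z > 5)
print(p_value)             # ~2.87e-07
print(round(1 / p_value))  # ~3.5 million (about 1 in 3,490,000)
```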
Now about each publisher:
MDPI: See Wikipedia for more information. You can see that Beall's criticism of MDPI rests on several claims, such as that MDPI's articles are lightly reviewed, that MDPI uses email spam, and that MDPI listed Nobel laureate Mario Capecchi on an editorial board without his knowledge. However:
- Many OA journals do indeed review lightly. For example, I once attended a talk by a Springer spokesperson about a journal that reviews for correctness, not novelty (I can't find the journal now, but PLOS ONE has the same policy). Viewed one way this is laudable - it makes peer review less random by eliminating one completely subjective criterion! Viewed another way, it is terrible - it makes it seem as though the journal will publish results that have been known for hundreds of years as long as the author is willing to pay. Which is closer to the truth? You'll have to come to your own conclusions.
- Email spam. Although everyone finds it annoying, what constitutes email spam isn't universally agreed on. If you received an email from someone you don't know saying "Dear Professor Strongbad, I saw your question on Academia.SE and find it interesting; would you like to write an editorial on predatory publishers for my journal?" - would you call that spam? Some people would; others would not. And where exactly does unsolicited email stop being spam? If you never emailed people you didn't know personally, you would never be able to grow a journal large enough to be self-sufficient.
- Finally, the Mario Capecchi case was later shown to be the result of a miscommunication by Capecchi's assistant.
Frontiers: again, see the Wikipedia article. You'll note that, as with MDPI, established academics have defended Frontiers. Although the allegations against Frontiers listed there are both more numerous and, if true, harder to excuse, it's also the case that a Frontiers journal rejected John Bohannon's sting paper. OASPA and COPE both investigated Frontiers, and both decided that Frontiers meets their membership criteria.
Hindawi: once again, see the Wikipedia article. I don't want to rehash everything I wrote about MDPI and Frontiers, since much of it also applies to Hindawi, but I'll add a few specific points:
- Hindawi was one of the pioneers of OA. In 2007, they converted all their journals to OA - this was both 1) before OA really took off and 2) pioneering, since even today most big publishers don't use a complete OA model.
- Hindawi is big. With over 400 journals and tens of thousands of published articles a year, Hindawi is a big fish in the OA pond.
- A Hindawi journal also rejected John Bohannon's sting paper.
- Some of Beall's criticism of Hindawi apparently focused on how high its profit margins are (apparently higher than Elsevier's). Not only does this have no bearing on the quality of Hindawi's editorial process, but Hindawi's article processing charges are also lower than average, and the company is based in Egypt, which as a developing country has much lower labour costs than the Netherlands-based Elsevier. One could insist that Egyptians must simply be worse at publishing than the Dutch, but that's borderline racism.
tl;dr: it's not a given that any unbiased observer will conclude that these three publishers are disreputable. Accordingly, it shouldn't be surprising that some OA consortia are willing to count them among their members.