Do I have to adapt my reviews according to the quality of a journal?
When an article is technically wrong, describes things that have been done dozens of times elsewhere, or is just awfully written, I recommend rejection, no matter the journal. Beyond that baseline, journal guidelines (not the perceived or actual journal "quality") come into play:
- Match with the journal's scope
- Journal requirements on the novelty of ideas (several journals accept articles that fill a gap in an otherwise well-understood field, whereas some very high-profile journals expect papers with significant novelty)
- Methodological requirements (e.g. some journals require that simulation results be validated against analytical or measurement results)
So there is a common baseline, but beyond that it depends on the journal's scope and guidelines.
I suspect you'll get a variety of answers to this, ranging from idealistic to pragmatic. My own take leans more towards the idealistic end of things. A few thoughts:
- An article exists as an entity in its own right, and has a particular level of quality. (There may be some correlation between the quality of the article and the quality of the journal to which it gets submitted, but that just gives you a somewhat unreliable prior for the article's quality; it doesn't change the quality per se.)
- As a community, we have a vested interest in seeing correct articles of sufficient significance accepted, and incorrect/insignificant articles sent back for revision until they meet the appropriate threshold (or rejected, but only if the authors absolutely refuse to play ball).
- Inevitably, since there is a limit to the number of articles that journals (and conferences) can take per unit time, there will be a need to prioritise the publication of some correct, significant articles over others; indeed, some such articles are more important/better than others. However, this prioritisation process should be handled by the associate editor (or area chair, for conferences), based on the recommendations received from the reviewers. The reviewers are there to provide a venue-independent assessment of the quality of the manuscript, not to usurp the role of the associate editor/area chair. This is partly because they often won't know the constraints of the journal/conference as well as those individuals.
- There is a reasonable argument that on top of this, some mechanism should be put in place to make sure that articles that meet the threshold get published somehow, even if there isn't space for them in the venue because they are beaten to the punch by better articles. For example, the authors could be given the option of automatically resubmitting to another venue, forwarding the positive reviews from the first venue and perhaps avoiding the need for another review round. The community as a whole isn't served well if we outright reject articles that are good enough simply due to lack of space.
In addition to the above, I would add that I do take the journal's author guidelines into account when reviewing papers, primarily because that allows me to make comments to the associate editor that are tailored to what the journal declares itself to be looking for (e.g. if it says that an extension paper must not include large chunks copied verbatim from the conference version, then I might quote that guideline in my review). However, I try to draw a distinction between comments on the quality of a paper and comments on the extent to which it complies with such guidelines (e.g. "it's a well-written paper: just like the conference version from which the authors appear to have copied large chunks of text, in contravention of the journal guidelines").