How to review papers for conferences without comparing/contrasting each other?

The goal is to score the papers on their own merits, not to rank them. Some conferences do have a ranking stage, usually applied to the already-scored papers and done by the committee, but just as with journal papers, it's possible to evaluate each paper on its own merits.

For example, if a paper were technically the "best of a bad lot" but still contained massive numbers of errors, poor writing, unintelligible graphics, etc., knowing that context isn't necessarily helpful: they're all bad papers.

Looking at their scoring metric, each of these levels can be described in a manner consistent with not knowing the other papers:

  • reject: This abstract isn't suitable for the venue, is just outright poor quality, etc. and is beyond salvaging.
  • weak reject: This abstract isn't pathologically flawed, but would need a considerable amount of work to be appropriate for the venue.
  • neutral: The "I suppose" category. Filler abstracts that could use a bit of polish, or ones that failed to rouse much of a strong feeling either way.
  • weak accept: Promising submissions that are appropriate for the venue. While not inherently flawed, there are probably improvements they could make.
  • accept: "I would like to see this accepted"

It's possible for you to get two outstanding papers and simply think that both should be accepted. Or two papers that should both be rejected flat out. And everything in between. Trying to rank every submission to a conference would be a massive effort, and it would still be somewhat arbitrary, since your ranking scheme is not inherently objective. Nor are you necessarily qualified to rank each and every submission.

You should take into account the nature of the conference itself - for example, there are some conferences where I am considerably more lenient than I am for others, in the same way that there are journals where I use somewhat stricter criteria. And yes, a strict reviewer could torpedo otherwise worthwhile papers, but that's why many groups use more than one reviewer, and the probability that any single reviewer is strict enough to meaningfully harm the overall peer-review process is pretty small.


A questionnaire-based metric system would be a pretty good system for reviewers and is used by many top conferences. Each question is answered on a 5-point scale. The following are a few examples:

  1. Is the paper relevant to the conference?
  2. How innovative is the paper?
  3. How would you rate the technical quality of the paper?
  4. How is the presentation?
  5. Would the paper be of interest to users and practitioners?
  6. What is your confidence in your review of this paper?
  7. Overall recommendation (in categories as mentioned by @Fomite)

The last few sections would be subjective, free-text descriptions:

  • Summary/overview of the paper
  • Strong points
  • Weak points

The final step would be to rank the papers by their quantitative scores and then separate contenders with the same score using the qualitative details.
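To make that last step concrete, here is a minimal sketch in Python of how such a questionnaire could be tallied and the papers ranked. The field names, the simple averaging, and the example papers are all assumptions for illustration, not a prescribed scoring formula.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical review record mirroring the questionnaire above:
# numeric answers on a 1-5 scale plus free-text sections.
@dataclass
class Review:
    relevance: int
    innovation: int
    technical_quality: int
    presentation: int
    interest: int
    confidence: int
    recommendation: int  # overall recommendation, 1 (reject) .. 5 (accept)
    summary: str = ""
    strong_points: str = ""
    weak_points: str = ""

    def score(self) -> float:
        """Average of the paper-related numeric answers (confidence excluded)."""
        return mean([self.relevance, self.innovation, self.technical_quality,
                     self.presentation, self.interest, self.recommendation])


def rank_papers(reviews_by_paper: dict[str, list[Review]]) -> list[tuple[str, float]]:
    """Rank papers by their mean reviewer score, highest first.

    Papers that end up with the same score would then be separated
    manually using the qualitative summary / strong points / weak points.
    """
    scored = {paper: mean(r.score() for r in revs)
              for paper, revs in reviews_by_paper.items()}
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)


if __name__ == "__main__":
    # Made-up papers and scores, purely for demonstration.
    reviews = {
        "paper-A": [Review(5, 4, 4, 3, 4, 4, 4, summary="Solid contribution")],
        "paper-B": [Review(3, 2, 3, 2, 3, 3, 2, weak_points="Unclear evaluation")],
    }
    for paper, score in rank_papers(reviews):
        print(f"{paper}: {score:.2f}")
```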


I can't add a comment to Ébe's answer (SO won't let me). However, I disagree on point 2. A more useful question is "How valuable is the paper?" - focusing on innovation tends to overlook fundamental contributions and encourages poor-quality work.

Other useful questions would be: "Has this been done elsewhere?" "How complete is the work?"

Also, conferences usually occur in a series. You should be able to look at the papers from the previous edition (the (n-1)th conference on xyz-science) to determine what the audience will expect - there is often significant overlap in attendees from year to year.