How do journal/conference reviewers ensure the integrity of the results in submitted research papers?

They don't. Reviewing papers is volunteer work that has to be done alongside one's regular job, so typically no more than half a day is spent reading a paper and writing the review. That is enough to filter out obvious scams. For more sophisticated fraud, we rely on people trying to use the results once they are published and discovering that they do not work. The threat of subsequent sanctions (and, for most, an internalized honor code) is what we hope deters most fraud.


As an example, when I review papers, I have limited time between other tasks that must be done. I generally do two read-throughs, each taking roughly a couple of hours depending on the length of the paper.

Aside from checking for consistency between sections (e.g. whether the results are accurately summarized in the abstract), I check:

  • that the method by which the data were obtained is clearly described
  • that the data are clearly and descriptively presented
  • that there is evidence of some data validation (important in my field)

At software conferences, it is becoming common to hold artifact evaluation sessions, in which the data or software produced by the authors is submitted and evaluated against the claims made in the paper. While this process is currently optional, it does increase confidence in the claims of papers that have undergone it.