Why are papers without code but with results accepted?
To me, it seems there are two main reasons:
- the belief that code is only a tool, a particular implementation being secondary to the idea or algorithm,
- historical residue (it was impractical to print many pages of code, especially as no one could copy and paste it).
Additionally:
- many scientists seem to be afraid to show their code in public, as they are aware of its poor quality (see also Why do many talented scientists write horrible software?), so they don't want to risk their reputation (both over the code's quality and over errors others might catch).
Moreover, there are issues tied to current incentives in academia (where publications, not code, determine one's career prospects):
- sharing code carries the risk of being scooped (instead of milking the same code for years),
- cleaning up code takes time that could instead be spent writing another publication.
Do people have to submit their code privately to the reviewers at least, so that they can reproduce the experiment if possible?
Typically, no. If the code is not made public, it is almost certain that no reviewer has checked whether it even exists, much less whether it is correct.
However, many scientists are starting to notice the problem (and they see how open-source culture flourishes), so there are new initiatives addressing this issue, such as the Science Code Manifesto:
Software is a cornerstone of science. Without software, twenty-first century science would be impossible. Without better software, science cannot progress.
Or, for example, this manifesto. Try searching for reproducible research, or look at tools such as knitr for R and this intro to IPython Notebook, or read about using GitHub for science. And it seems to be taking off.
What field are you talking about? A CS paper describing the design and performance of a computer vision algorithm is different from a sociology paper that used a spreadsheet to crunch demographic data.
Do most journals / conferences just "trust" that people who submit the paper really implemented the theory and got those exact results?
Yes. The presumption is always that there is no scientific fraud involved.
I always had this idea that any experiment should be reproducible by others, else it's not scientifically justified.
If the algorithms are fully described in the paper, then the result is reproducible. To reproduce it, you have to reimplement the algorithm.
I just started reading some papers, thought "now let's look at the code", and was quite astonished that most papers don't have any code to look at, while claiming some performance or being better than other papers.
Presumably the better performance is because the algorithm described in the paper is more efficient. For example, when sorting a large amount of data, quicksort is a better sorting algorithm than bubble sort: quicksort has O(n log n) average-case performance, while bubble sort has O(n^2), and this is true regardless of the details of the implementation.
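To make that point concrete, here is a minimal Python sketch of both algorithms. The implementations (and the names `bubble_sort` and `quicksort`) are my own illustrative versions, not taken from any particular paper; the asymptotic comparison holds however you choose to implement them.

```python
import random

def bubble_sort(data):
    """Bubble sort: repeatedly swap adjacent out-of-order pairs.
    Average and worst case are O(n^2) comparisons."""
    a = list(data)
    n = len(a)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # already sorted, stop early
            break
    return a

def quicksort(data):
    """Quicksort: partition around a pivot and recurse.
    Average case is O(n log n) comparisons."""
    if len(data) <= 1:
        return list(data)
    pivot = data[len(data) // 2]
    left = [x for x in data if x < pivot]
    middle = [x for x in data if x == pivot]
    right = [x for x in data if x > pivot]
    return quicksort(left) + middle + quicksort(right)

if __name__ == "__main__":
    values = [random.randint(0, 10_000) for _ in range(1_000)]
    # Both produce the same output; only the running time differs.
    assert bubble_sort(values) == quicksort(values) == sorted(values)
```

Either implementation reproduces the same result; the claimed performance difference comes from the algorithm itself, which is why a full description in the paper is usually considered sufficient.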
I think an issue related to the one raised by Piotr (+1) is that research funding is not generally available to cover the costs of producing highly reliable, portable code, or of maintaining and supporting code produced to "research quality". I have found this to be a significant issue when trying to use code released by other researchers in my field: all too often I can't get their code to work because it uses some third-party library that is no longer available, or that only works on a Windows PC, or that no longer works on my version of the software because it relies on some deprecated feature of the language/environment. The only way around this is to re-implement the routines from the third-party library so that all of the code is provided as a single monolithic program. But who has the time to do that in an underfunded "publish or perish" environment?
If society wants high quality code to accompany every paper, then society needs to make funds available so that good software engineers can write it and maintain it. I agree this would be a good thing, but it doesn't come at zero cost.