How common is it for a paper to be wrong?

I'll try to give a general answer from a non-CS perspective.

tl;dr: yes, there are errors out there. A lot of errors, clerical and not, even in oft-cited papers and books, in every field. It's inevitable: though they do their best to avoid errors, authors are human after all, and reviewers are humans too (I know, you never find a damn robot when you need one). Thus, whenever you read a paper, keep your critical thinking switched on.


EXAMPLES

I'll start this too-long section with an anecdote. When I was working on my master's thesis, some twenty years ago, I needed a result published in a much-cited paper by a renowned author in the field of electromagnetics. At the time, (almost) young and inexperienced, I thought that papers were always absolutely right, especially when written by recognized authorities. To practice the technique of the paper, I decided to rederive the results: after a week spent redoing the calculations over and over again, I still couldn't arrive at the same final equation. I eventually found the correct equation, the one I was obtaining, in a book published later by the same author. Indeed, it was a clerical error that changed absolutely nothing in the paper, but it was annoying, and it taught me an important lesson: papers and books contain errors. And, of course, I later published papers with mistakes in equations myself (not for revenge!) [*].

Since that first experience, I've discovered that you can find more fundamental errors, even in well-known books and papers. Here are a few examples, taken from different fields, to underline how broad the phenomenon is (each item opens with the mistaken claim; the field is given in parentheses):

  1. (Classical mechanics) In Newtonian mechanics, the correct equation of motion in the case of variable mass is F = dp/dt. This statement can be found in many classical books on Newtonian mechanics, but it is plainly wrong, because that equation, when the mass is variable, is not invariant under Galilean transformations, as is expected in Newtonian mechanics (actually, the concept of variable mass in Newtonian mechanics can be misleading if not properly handled); the invariance argument is sketched just after this list. For a deeper discussion see, e.g., Plastino (1990), Pinheiro (2004) and Spivak's book Physics for Mathematicians, Mechanics I. As a curiosity, that wrong equation is used by L. O. Chua in this speech (14:50 min) as an example to introduce the memristor.
  2. (Circuit analysis) Superposition can't be applied directly to controlled sources. I came across this statement for the first time only a few years ago, and I was stunned: hey, I've applied superposition to controlled sources since I was in high school, and I've always got the right result. How could it possibly not be applicable? In fact, it can be applied; the important thing is to apply it correctly (see the worked example after this list). But there are really many professors (I have several examples from Italy and the US) who don't understand this point and fail to notice that the proofs of several theorems in circuit analysis are actually based on the applicability of superposition to controlled sources. For more on this, see e.g. Damper (2010), Marshall Leach (2009) and Rathore et al. (2012).
  3. (Thermodynamics) The Seebeck effect is a consequence of the contact potential. This false statement appears frequently in technical books and application notes about thermocouples.
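
For item 1, here is a minimal sketch of the invariance argument (my own wording, not taken verbatim from the cited references). Write the claimed law for a body of time-varying mass m(t) and apply a Galilean boost v -> v + u with constant u:

    % Claimed law: F = dp/dt with p = m(t) v
    \[
      F \;=\; \frac{d(mv)}{dt} \;=\; m\frac{dv}{dt} + v\frac{dm}{dt}
    \]
    % In the boosted frame, dv/dt is unchanged but v becomes v + u:
    \[
      F' \;=\; m\frac{dv}{dt} + (v+u)\frac{dm}{dt} \;=\; F + u\,\frac{dm}{dt}
      \;\neq\; F \quad \text{whenever } \frac{dm}{dt}\neq 0 .
    \]

So the "force" would depend on the inertial frame, contradicting Galilean invariance; the frame-independent (Meshchersky-type) equation instead involves the velocity of the expelled or accreted mass relative to the body.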
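
For item 2, here is a minimal sketch showing that superposition works with a controlled source as long as the controlling relation is kept as a constraint. The circuit is my own toy example, not taken from the cited papers: an independent source Vs feeding node A through R1, a resistor R2 from A to ground, and a current-controlled current source injecting beta*i1 into A, where i1 is the current through R1.

    import sympy as sp

    Vs, R1, R2, beta, Va, Ic = sp.symbols('Vs R1 R2 beta Va Ic', positive=True)

    # (1) Direct nodal analysis: KCL at node A, with i1 = (Vs - Va)/R1
    i1 = (Vs - Va) / R1
    va_direct = sp.solve(sp.Eq(i1 + beta * i1, Va / R2), Va)[0]

    # (2) Superposition: treat the controlled source as an independent source
    #     of value Ic, superpose the two partial responses, then enforce the
    #     controlling relation Ic = beta * i1 as a constraint.
    va_super_expr = Vs * R2 / (R1 + R2) + Ic * R1 * R2 / (R1 + R2)
    sol = sp.solve([sp.Eq(Va, va_super_expr),
                    sp.Eq(Ic, beta * (Vs - Va) / R1)], [Va, Ic])

    print(sp.simplify(va_direct - sol[Va]))  # -> 0: both methods agree

The point is that "turning off" the controlled source only happens while computing the partial responses; its defining equation is never discarded.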

Apropos of my own errors: a couple of weeks after writing this answer, I discovered an error in an equation of a published conference paper which I co-authored. A fraction that should have been something like -A/B became B/A. "Hey", I told one of the other authors, "how could we possibly have written this? And how did it get past the reviewers?" The fact is that that equation was associated with a simple, well-known example given in the introduction, an example so simple that probably neither we authors nor the reviewers gave the equation a second look (after all, who could possibly get this wrong?). I feel that many clerical errors like this one happen because of last-minute changes to notation: you have almost finished the paper and you realize that you could have employed a better notation... so, let's change it on the fly! And that's where certain errors sneak in. Avoid last-minute changes, if you can.


TL;DR: The number is probably a double-digit percentage.

I made an outlier detection algorithm for data extracted from neuroscience journal articles. It is detailed in "Modeling of activation data in the BrainMap(TM): Detection of outliers" http://onlinelibrary.wiley.com/doi/10.1002/hbm.10012/abstract The redundancy between coordinates and text allows me to catch 'strange' data, some of which are typos in the original article. I have not compiled statistics on the number of articles with errors, but perhaps 1% or more have the issue. Note that the typos are rather minor (e.g., a sign error in a single coordinate among many other reported numbers) and do not affect the overall conclusions. (For the interested: results for my database are available here: http://neuro.compute.dtu.dk/services/brededatabase/index_lobaranatomy_novelty.html)
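
To give an idea of the kind of redundancy check involved (a hypothetical sketch, not the actual algorithm of the paper above): in Talairach/MNI coordinates the left hemisphere has negative x, so a location labelled "left" but reported with a positive x coordinate is a candidate sign typo worth flagging.

    def flag_sign_typos(locations):
        # locations: list of (label, (x, y, z)) tuples; return the entries
        # whose hemisphere label disagrees with the sign of the x coordinate.
        flagged = []
        for label, (x, y, z) in locations:
            name = label.lower()
            if (name.startswith('left') and x > 0) or (name.startswith('right') and x < 0):
                flagged.append((label, (x, y, z)))
        return flagged

    reports = [('Left amygdala', (-22, -4, -18)),
               ('Left amygdala', (22, -4, -18)),   # suspicious: 'left' but x > 0
               ('Right insula', (38, 10, 2))]
    print(flag_sign_typos(reports))                # flags the second entry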

Within the medical domain, John Ioannidis has carried out a number of studies estimating the rate of erroneous claims in articles. The famous "Why Most Published Research Findings Are False" http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124 gives a theoretical estimate whose assumptions are probably not entirely correct, but, following his argument, a double-digit percentage of findings might be "false". In "Contradicted and initially stronger effects in highly cited clinical research" https://jama.jamanetwork.com/article.aspx?articleid=201218 he found that the claims of 16%-32% of highly cited original clinical research studies were contradicted by subsequent studies.
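
To see where a double-digit figure can come from, here is a back-of-the-envelope version of the argument (my own illustrative numbers, not Ioannidis' exact model): with significance level alpha, power 1 - beta, and pre-study odds R that a tested relationship is true, the probability that a "significant" finding is actually true is PPV = (1 - beta)*R / ((1 - beta)*R + alpha).

    def ppv(alpha, power, prior_odds):
        # Positive predictive value of a statistically significant finding
        return power * prior_odds / (power * prior_odds + alpha)

    # Plausible values: alpha = 0.05, power = 0.6, one true relationship per four tested
    print(ppv(alpha=0.05, power=0.6, prior_odds=0.25))  # ~0.75, i.e. ~25% of positives are false

With lower prior odds or lower power, the share of false positives easily climbs well into double digits.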

In "Data Extraction Errors in Meta-analyses That Use Standardized Mean Differences", Peter C. Gøtzsche found discrepancies in 37% of the meta-analyses examined. Ironically, a comment on the Gøtzsche paper pointed out a discrepancy in that paper itself.
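
As an illustration of why such extraction errors are easy to make (a hypothetical example, not taken from the Gøtzsche paper): standardized mean differences depend on the pooled standard deviation, so mistaking a reported standard error for a standard deviation changes the result dramatically.

    import math

    def smd(m1, sd1, n1, m2, sd2, n2):
        # Cohen's d with a pooled standard deviation
        pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled

    # Treatment vs control: means 12 and 10, SD = 5, 50 patients per group
    print(smd(12, 5, 50, 10, 5, 50))        # ~0.4, a modest effect
    # Same data, but the standard error 5/sqrt(50) is mistaken for the SD
    se = 5 / math.sqrt(50)
    print(smd(12, se, 50, 10, se, 50))      # ~2.8, a spuriously huge effect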

These examples are perhaps not so relevant to computer science. I do think that typos occur there now and then: I recently found what I believe were typos in equations in applied computer science articles. The typos probably do not affect the results. I would say that, generally, errors in computer science articles are not necessarily rare.

Update 28 August 2015:

A description of a large series of replications of psychology experiments has just been published; see http://www.sciencemag.org/content/349/6251/aac4716.abstract

Among its reported results are: "Ninety-seven percent of original studies had significant results (P < .05). Thirty-six percent of replications had significant results".


Massimo Ortolano gave many non-CS examples, so I'll give a CS one: the paper by Alan Turing that gave birth to computer science contained many errors in its proofs.

However, in my opinion, although there are many errors in papers, these errors are mostly confined to small details. For papers published in well-known conferences, the main ideas are very unlikely to be wrong.

As you mentioned, the statement about complexity is only one line, given without proof, so it is clearly not the main focus of the paper. I would not be surprised if there were an error in such a small detail.

If I were you, I would try to prove what you are thinking; this will help you understand the paper deeply. And if the statement is actually wrong, you can notify the authors, or publish your proof if the error is important enough.