How are citations used to fairly compare researchers: fewer publications with fewer co-authors versus more publications with many co-authors?
A general rule of bibliometrics is that they shouldn't be used to compare people (or papers, or projects) across different fields, because publication culture can vary wildly within a discipline, even between closely allied fields. To begin with, the size of the field - the number of papers published per year, for example - has a direct impact on how many citations each paper gets.
Your quandary is an example of this. In certain fields, such as high-energy physics, astronomy, or parts of biology, much of the science is concentrated in very large collaborations, which produce papers with many authors and many citations. It is indeed unfair to use citation counts to compare such a CV with, say, a mathematician's, since papers in mathematics tend to have few authors and, in many specialized subfields, even high-quality papers are read by very few people.
Whether such bibliometrics are used in practice by hiring committees obviously depends on the field, the institution, and the specific people involved. If all the applicants come from similar fields, this may not be a huge problem, but the numbers still need to be treated with caution to avoid the issue you point out. If a hiring or review process places heavy emphasis on citation counts (or other bibliometrics) when comparing applicants from different fields, then that is indeed a problem.
One final thing to keep in mind is that applicants with a high-citation-count, large-collaboration paper on their CV are likely to be asked questions like
So, what was your role in this collaboration?
as part of the interview process in any case.
In biology these papers have become extremely common. They are often the result of high-throughput data-generation projects (e.g. genome sequencing projects). Because these projects produce large amounts of data, that data is widely reused, which generates many citations (this is why journals like these papers: they are impact-factor boosters).
However, I think this is not such a big problem.
In many cases where it is important for people to understand your contribution, there tend to be ways of conveying it. For example, some funding agencies may ask you to specify, in words or in numbers, what your contribution to each paper was. If these are the only papers you have, the relevant people will want to know exactly what your role in the project was. You will almost always be able to explain or emphasize your role in a cover letter.
When it is less important for people to understand your exact contribution, I find that they usually give such papers very little weight. I suspect this comes from an underlying assumption that, absent other information, each author's contribution is the inverse of the number of authors (a maximum-entropy assumption, perhaps). That said, in biology the first and last authors have special status, and I think this holds for these papers too.
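To make the "inverse of the number of authors" heuristic concrete, here is a minimal sketch (the function name and the numbers are purely illustrative, not from any real bibliometric tool): with no other information, each author is credited with the paper's citations divided equally among all authors.

```python
def fractional_credit(citations: int, n_authors: int) -> float:
    """Split a paper's citation count equally among its authors."""
    return citations / n_authors

# A heavily cited consortium paper with 100 authors can yield less
# per-author credit than a modestly cited two-author paper:
print(fractional_credit(500, 100))  # 5.0 citations per author
print(fractional_credit(40, 2))     # 20.0 citations per author
```

Under this crude split, the middle author of a large-collaboration paper gets very little credit, which is exactly the under-recognition problem described below; real evaluations often deviate from the equal split by giving first and last authors extra weight.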
So the main problem, I think, is not getting more recognition than you deserve but actually less, if you are an author somewhere in the middle of the list. However, as I mentioned, you will usually have some other avenue to explain your exact contribution. The only way I can see these papers being very useful on a CV is as an indicator that you can attract collaborations and funding (these projects are typically well funded).
Where I work, at an academic computing center, I never look at citation counts when hiring at the postdoc and research associate (junior researcher) level. We are very different from an academic department, and citation counts aren't all that useful to us. We are looking for a certain skill set, which includes good publications but also lots of other things.