How do I write up research results when some study participants dropped out?
Removing the people who dropped out from your data set is a very bad idea. The problem is that you do not know whether dropping out is correlated with the effect you are studying.
For an extreme example, consider a study on the effect of being shot at on soccer ability. In round 1, people play soccer, then they get shot at randomly with a gun that might or might not hit them, then they play round 2 of soccer, then they get shot at again, and then they play round 3 of soccer. Of course, anybody who actually gets hit when they are shot will probably drop out. If you eliminate those people, you will vastly underestimate how badly soccer players are affected by people shooting at them.
This may seem like a rather extreme example, but things very much like it happen frequently in biomedical or psychological studies, just with less obvious causal connections.
Report exactly what happened, and take the missing people into account when you are computing your effect size. If you need help with the technical aspects of that, ask on Cross Validated (stats.SE).
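To make the point concrete, here is a minimal sketch of why simply discarding dropouts can mislead: it compares a complete-case estimate with crude worst/best-case bounds. The counts and the simple bounding approach are invented for illustration, not a method this answer prescribes.

```python
# Invented counts for a two-arm study; dropouts' outcomes are unknown.
n_treat, n_control = 50, 50                    # participants randomized per arm
responders_treat, responders_control = 20, 18  # observed successes
dropouts_treat, dropouts_control = 10, 2       # lost to follow-up

# Complete-case analysis: dropouts are silently discarded.
cc_effect = (responders_treat / (n_treat - dropouts_treat)
             - responders_control / (n_control - dropouts_control))

# Crude bounds: assume every dropout had the outcome least (then most)
# favorable to the treatment arm.
worst = (responders_treat / n_treat
         - (responders_control + dropouts_control) / n_control)
best = ((responders_treat + dropouts_treat) / n_treat
        - responders_control / n_control)

print(f"complete-case effect: {cc_effect:+.3f}")
print(f"range under extreme assumptions about dropouts: [{worst:+.3f}, {best:+.3f}]")
```

The gap between the single complete-case number and the bounds is exactly the uncertainty that dropping the missing people hides.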
You should present demographics for every round, for accuracy, completeness, and your own sanity. Drop-outs and people lost to follow-up are still data points, especially in medical, psychological, and sociological studies. They may not have outcome data, but they were recruited and participated at least in the initial phase of data collection (demographics), and leaving them uncounted can suggest things you never intended, such as a cherry-picked sample.
Anyway, an example helps show why reporting the full picture gives a clearer image.
Let's say 100 animals sign up for a study at Round 1. The demographics are as follows: 50 dogs, 50 cats.
However, when Round 2 rolls around, 20 cats are nowhere to be found. The results are collected from the remaining subjects; 25 dogs are peanut butter lovers, and 15 cats are peanut butter lovers.
If you only say that 20 animals dropped out, that information doesn't mean very much, since the reader doesn't know which animals dropped out. In actuality, both dogs and cats showed a 50% split among the subjects actually measured, but a partial report can be misread as, say, 25 of 40 dogs and 15 of 40 cats, because you haven't provided the breakdown. In addition, neglecting to mention that you originally had 50 dogs and 50 cats, and only reporting 50 dogs and 30 cats in the final results, could suggest selection bias or a lack of interest, as opposed to cats being lost to follow-up.
So you would present in a nice table or summary:
During Round 1, 50 dogs and 50 cats were recruited for the peanut butter study. However, 20 of the original 50 cats (40.0%) dropped out before Round 2 testing and could not be replaced. During Round 2 testing, it was found that 25 of 50 dogs (50.0%) and 15 of 30 cats (50.0%) preferred peanut butter.
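For completeness, here is the arithmetic behind that summary written out as a small script, using exactly the counts from the example:

```python
# Counts from the dog/cat peanut butter example above.
recruited = {"dogs": 50, "cats": 50}
lost_to_follow_up = {"dogs": 0, "cats": 20}
peanut_butter_lovers = {"dogs": 25, "cats": 15}

for species in recruited:
    analyzed = recruited[species] - lost_to_follow_up[species]
    dropout_rate = lost_to_follow_up[species] / recruited[species]
    preference_rate = peanut_butter_lovers[species] / analyzed
    print(f"{species}: {analyzed} analyzed of {recruited[species]} recruited "
          f"({dropout_rate:.0%} lost to follow-up), "
          f"{preference_rate:.0%} prefer peanut butter")
```

Reporting both denominators (recruited and analyzed) is what lets the reader recover the 40% dropout rate among cats and the 50% preference in each species.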
I fully agree with the other answers that you should do statistical analyses on your dropouts, and report and think about the results. Did the people who dropped out differ significantly from the participants who stayed on? For instance, more women may have dropped out, or more men, or those who were less successful in the initial rounds. If so, you may have problems such as selection bias, which you should discuss. (Or you may already have your next research idea right there ;-)
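As a rough sketch of such a check (assuming scipy is available; the counts are invented for illustration), you could cross-tabulate a demographic variable against completion status and test for an association:

```python
from scipy.stats import chi2_contingency

# Invented 2x2 table: sex vs. completion status.
#                completed  dropped out
table = [[42, 18],   # women
         [45,  5]]   # men

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A small p-value would suggest dropout is associated with sex,
# i.e. a possible selection bias worth discussing in the write-up.
```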
As others write, don't just drop data. Data is precious. Use all you have.
The CONSORT group ("Consolidated Standards of Reporting Trials") has some materials on this. It also publishes a flowchart template (an MS Word document) that seems to be becoming the norm for reporting dropouts over the course of a trial. I know of a few journals that require exactly this kind of flowchart for submission, and it usually ends up in the online supplement of the article. I find such a structure enormously helpful, certainly more so than a free-text description that one needs to wade through. I'd strongly recommend you include this kind of flowchart.
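To give a rough idea of the information such a flowchart carries, here is a toy sketch of stage-by-stage participant counts, reusing the dog/cat numbers from the example above; the real CONSORT template is a diagram, not code, and has more stages (enrollment, allocation, follow-up, analysis).

```python
# Toy stage-by-stage participant counts in the spirit of a CONSORT-style flow,
# using the 100-animal example above (not the official template).
flow = [
    ("Recruited at Round 1",                 100),
    ("Lost to follow-up before Round 2",      20),
    ("Assessed in Round 2",                    80),
    ("Included in the analysis",               80),
]
for stage, n in flow:
    print(f"{stage:40s} n = {n}")
```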