When to use Iterated Forcing
There is no criterion that needs to be satisfied in order to use iterated forcing. Not only that, but since an iteration of forcings is the same as taking a single forcing extension (using the iteration poset), the question sort of falls flat on itself.
Even worse, with the exception of a certain class of "minimal" generic extensions, most (in some sense) forcing notions are in fact an iteration, since they can be decomposed into an iteration of smaller subforcings. For example, adding a Cohen real can be thought of as adding two Cohen reals one after the other. And collapsing $\omega_1$ can be thought of as first adding a Cohen real, then adding a branch to the Suslin tree added by that Cohen real, and then collapsing $\omega_1$.
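To make the first decomposition concrete (a sketch; the notation $\mathbb{C}$ for Cohen forcing, i.e. finite partial functions from $\omega$ to $2$, is not fixed above): since $\omega$ splits into two disjoint infinite pieces,
$$\mathbb{C}\;\cong\;\mathbb{C}\times\mathbb{C},$$
and a product of two ground model posets is just the special case of a two-step iteration $\mathbb{C}\ast\check{\mathbb{C}}$ in which the second iterand is (a check name for) a ground model poset.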
So why do we even use iterated forcing?
Because it's convenient. Because it is easier to break down a large problem into smaller problems, and then deal with them one at a time. When forcing Martin's Axiom, for example, it is easier to deal with the forcing notions one step at a time, rather than trying to somehow capture all of the existing ones, and all of those yet to come, simultaneously.
Stranger still, the iterative approach to Martin's Axiom is pure magic: every limit step adds Cohen reals, every Cohen real adds a Suslin tree, and yet Martin's Axiom implies that there are no Suslin trees.
How does it happen? Because of the very nature of the iteration, at each step we anticipate "a problem", and we solve it. In particular, every Suslin tree that shows up along the way is itself a ccc partial order, so the bookkeeping eventually forces with it, adds a branch, and thereby destroys it.
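As a rough sketch of that bookkeeping (in notation not fixed above): to force $\mathsf{MA}+2^{\aleph_0}=\kappa$ for a suitable regular $\kappa$ with $\kappa^{<\kappa}=\kappa$, one builds a finite support iteration
$$\langle\mathbb{P}_\alpha,\dot{\mathbb{Q}}_\alpha : \alpha<\kappa\rangle,$$
where at stage $\alpha$ a bookkeeping function hands us a $\mathbb{P}_\alpha$-name $\dot{\mathbb{Q}}_\alpha$ for a ccc forcing of size less than $\kappa$. The whole iteration is ccc, so every ccc poset of size ${<}\kappa$ in the final model (together with fewer than $\kappa$ dense sets) already appears at some intermediate stage, and the bookkeeping is arranged so that it gets dealt with at some later stage.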
Other times we might want to construct an object via forcing, but our starting model is required to have certain objects which are not guaranteed to exist. Or perhaps the construction requires a certain degree of genericity over the model, so first adding something new to work with is a good thing. In these situations we start with $V$, we extend it once with a preparation (which itself may or may not be an iteration, e.g. for Martin's Axiom or for the indestructibility of large cardinals), and then perform one or two more extensions to obtain the final model.
Yes, we can describe the whole thing as a single forcing poset. But why? It offers no better result, and it only makes it harder to describe the objects we construct, or to argue why they have this or that property.
For exactly the same reason it is sometimes convenient to think about a Cohen real as a subset of $\omega$, sometimes as a binary sequence in $2^\omega$, and sometimes as a general sequence in $\omega^\omega$. And sometimes it is easier to think about a single Cohen real as infinitely many different Cohen reals instead.
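That last point can also be made concrete (same caveat about notation): since $\omega\cong\omega\times\omega$, we get
$$\mathbb{C}\;\cong\;\operatorname{Add}(\omega,\omega),$$
where $\operatorname{Add}(\omega,\omega)$ is the finite support product adding $\omega$ many Cohen reals at once; so a single Cohen real can be sliced into infinitely many Cohen reals.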
Maybe I can add a natural example to Asaf's great answer. In the case of establishing the consistency of Borel's conjecture, iterated forcing comes up naturally in the construction. Borel's conjecture (BC) states that all strong measure zero sets of reals are countable (it is not important here what that means). As it turns out, this statement is independent of ZFC: it is not too hard to construct a counterexample using CH, but to show that BC is consistent, you need forcing.

But how do you force BC? Richard Laver found a poset (now called Laver forcing) that is easy to describe and has the following property: if $V[g]$ is a generic extension of $V$ via Laver forcing and $A\in V$ is a set of reals which is strong measure zero in $V[g]$, then $A$ is countable (both in $V$ and in $V[g]$). (Note that the evaluation of the statement "$A$ is strong measure zero" may change from $V$ to $V[g]$.) Now you have made some progress: there are no "old" counterexamples to BC anymore. But there might still be new counterexamples!

It is now clear that one should try to iterate this process to make the new counterexamples vanish in the limit. This turns out to work; however, it is not enough to iterate just $\omega$- or $\omega_1$-many times, it is important to do it $\omega_2$-many times (otherwise CH might still hold in the extension). Moreover, one has to choose the support correctly; in this case it is countable support, for technical reasons.
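In symbols (a rough sketch of the construction; the notation is not from the answer above): starting from a model of CH, one forces with a countable support iteration
$$\langle\mathbb{P}_\alpha,\dot{\mathbb{Q}}_\alpha : \alpha<\omega_2\rangle,$$
in which each $\dot{\mathbb{Q}}_\alpha$ is a $\mathbb{P}_\alpha$-name for Laver forcing. Roughly, any set of reals of size $\aleph_1$ in the final model already appears in some intermediate model $V[G_\alpha]$ with $\alpha<\omega_2$ (a chain condition/cofinality argument), and the remaining stages, together with a preservation argument, guarantee that it is not strong measure zero in the final model.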