The use of the limit in Theorem 3.13 of Titchmarsh's book "The Theory of the Riemann Zeta-Function"
Let $\epsilon >0$; by the above (standard material you seem to agree with), there is a large enough half-integer $x(\epsilon)$ (depending on $s=1+it$ and $\epsilon$) such that $|\sum\limits_{n<x} \frac{\mu(n)}{n^s}-\frac{1}{\zeta(s)}| < \epsilon$ for every half-integer $x \geq x(\epsilon)$. As usual there is no loss in restricting to half-integers (i.e. numbers of the form $\frac{2k+1}{2}$), because the Dirichlet partial sum is constant between consecutive integers and the individual jump $\frac{\mu(n)}{n^s}$ tends to zero since $s=1+it$
(choosing in Perron's formula $c=\frac{1}{\log x}$, $\log T= (\log x)^{\frac{1}{10}}$, $\delta = A(\log T)^{-9}=A(\log x)^{-\frac{9}{10}}$, with $x$, and hence $T$, large enough, and $A$ a positive absolute constant coming from the zero-free region of $\zeta$).
But then the relation $|\sum\limits_{n<x} \frac{\mu(n)}{n^s}-\frac{1}{\zeta(s)}| < \epsilon$ for every half-integer $x\geq x(\epsilon)$ is precisely what is needed to conclude that $\lim\limits_{x\to\infty}\sum\limits_{n<x} \frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)}$, by the very definition of a limit.
So the point is that you apply Perron with large but finite $x$, and "letting $x$ go to $\infty$" is just shorthand for the $\epsilon$-$x(\epsilon)$ relation above; it has nothing to do with making $x$ infinite as a number.
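Purely as a numerical illustration of that $\epsilon$-$x(\epsilon)$ statement (not part of the proof), here is a small Python sketch; the helper `mobius_sieve`, the value $t=1$ and the cut-offs are my own arbitrary choices, and `mpmath` is assumed only to evaluate $\zeta(1+it)$.

```python
# Numerical illustration only: partial sums of mu(n)/n^(1+it) at cut-offs
# x = N + 1/2 drift (slowly) towards 1/zeta(1+it).  t = 1 is an arbitrary choice.
from mpmath import zeta

def mobius_sieve(N):
    """Return a list mu with mu[n] the Moebius function of n, 1 <= n <= N (index 0 unused)."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for k in range(2 * p, N + 1, p):
                is_prime[k] = False
            for k in range(p, N + 1, p):
                mu[k] *= -1
            for k in range(p * p, N + 1, p * p):
                mu[k] = 0
    return mu

t = 1.0
s = 1 + 1j * t
target = 1 / complex(zeta(s))            # 1/zeta(1+it) via mpmath
mu = mobius_sieve(10**5)
for N in (10**2, 10**3, 10**4, 10**5):   # i.e. half-integer cut-offs x = N + 1/2
    partial = sum(mu[n] * n ** (-s) for n in range(1, N + 1))
    print(N, abs(partial - target))
```

The printed differences should broadly decrease (not necessarily monotonically), which is exactly why one wants the quantitative Perron estimate above rather than naive numerics.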
To see the idea of the proof, let $M(x) = \sum_{n \le x} \mu(n)$ and look instead at $f(x) = \int_1^x M(y)\,dy = \sum_{n\le x}\mu(n)(x-n)$, so as to work with absolutely convergent integrals.
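Again only as an illustration (my own script, not Titchmarsh's): using $f(x)=\sum_{n\le x}\mu(n)(x-n)=xM(x)-\sum_{n\le x}n\mu(n)$, one can tabulate $f(x)/x^2$ and watch it drift towards $0$, consistent with the bound $f(x)=o(x^2)$ obtained below.

```python
# Numerical illustration only: f(x) = sum_{n<=x} mu(n)*(x - n) = x*M(x) - sum_{n<=x} n*mu(n),
# i.e. the integral of M from 1 to x; the ratio f(x)/x^2 should drift towards 0.

def mobius_sieve(N):
    """Same simple Moebius sieve as in the previous sketch (index 0 unused)."""
    mu = [1] * (N + 1)
    is_prime = [True] * (N + 1)
    for p in range(2, N + 1):
        if is_prime[p]:
            for k in range(2 * p, N + 1, p):
                is_prime[k] = False
            for k in range(p, N + 1, p):
                mu[k] *= -1
            for k in range(p * p, N + 1, p * p):
                mu[k] = 0
    return mu

X = 10**6
mu = mobius_sieve(X)
M = 0          # running M(x) = sum_{n <= x} mu(n)
S = 0          # running sum_{n <= x} n * mu(n)
for x in range(1, X + 1):
    M += mu[x]
    S += x * mu[x]
    if x in (10**3, 10**4, 10**5, 10**6):
        f = x * M - S      # f(x) = x*M(x) - sum_{n<=x} n*mu(n)
        print(x, M, f / x**2)
```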
Then Titchmarsh shows a lower bound $|\zeta(1+it)|\ge B/\log(2+|t|)$ together with an upper bound $|\zeta'(s)|\le B\log(2+|t|)$, and these combine to give $|\zeta(s)|>\frac{1}{A\log(2+|t|)}$, i.e. $|\frac1{\zeta(s)}|<A\log(2+|t|)$, for $s=\sigma+it$, $\sigma\ge 1-\frac{1}{A\log^2(2+ |t|)}$.
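For completeness, the interpolation that combines the two bounds is just (my constants, only the shape of the argument matters): writing $\zeta(\sigma+it)=\zeta(1+it)-\int_\sigma^1 \zeta'(u+it)\,du$, if $1-\sigma\le \frac{1}{A\log^2(2+|t|)}$ with $A$ large enough in terms of $B$, then $$|\zeta(\sigma+it)|\ \ge\ \frac{B}{\log(2+|t|)}-\frac{B\log(2+|t|)}{A\log^2(2+|t|)}\ \ge\ \frac{B}{2\log(2+|t|)},$$ which, after renaming the constant, is the stated bound for $\frac1{\zeta(s)}$ in that region.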
And this is what we need to conclude $$f(x) = \frac{1}{2\pi i} \int_{2-i\infty}^{2+i\infty} \frac{1}{\zeta(s)} \frac{x^{s+1}}{s(s+1)}\,ds= \frac{1}{2\pi i} \int_{\Re(s) = 1-\frac{1}{A\log^2 (2+|\Im(s)|)}} \frac{1}{\zeta(s)} \frac{x^{s+1}}{s(s+1)}\,ds \\= O\Big(\int_{-\infty}^\infty \frac{\log (2+|t|)}{1+t^2 }\,x^{2-\frac{1}{A \log^2(2+ |t|)}}\,dt\Big)\\ = O\Big(x^{2-\frac{1}{A \log^2(2+ T)}}\int_0^T \frac{\log (2+t)}{1+t^2 }\,dt\Big)+O\Big(x^{2}\int_T^\infty \frac{\log (2+t)}{1+t^2 }\,dt\Big)\\ =O\Big(x^{2-\frac{1}{A \log^2(2+ T)}}\Big) + O\Big(\frac{x^2 \log T}{T}\Big)\\=O\Big(x^{2-\frac{1}{A \log^2(2+ e^{\log^{1/4} x})}}\Big) +O\Big(\frac{x^2\log^{1/4} x}{e^{\log^{1/4} x}}\Big)=O\Big(\frac{x^2}{e^{\log^{1/8} x}}\Big)=o(x^2),$$ where the contour is shifted without crossing any poles (the integrand is analytic between the two lines, since $\zeta$ has no zeros there and $s=0,-1$ lie to the left) and we took $T=e^{\log^{1/4} x}$ in the last line.
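Spelling out the last two steps (same choice $T=e^{\log^{1/4}x}$, constants mine): since $\log(2+T)\sim\log^{1/4}x$, $$x^{-\frac{1}{A\log^2(2+T)}}=\exp\Big(-\frac{\log x}{A\log^2(2+T)}\Big)=\exp\Big(-\frac{(1+o(1))\log^{1/2}x}{A}\Big),\qquad \frac{\log T}{T}=\frac{\log^{1/4}x}{e^{\log^{1/4}x}},$$ and for $x$ large enough both quantities are at most $e^{-\log^{1/8}x}$, which gives the stated $O\big(x^2 e^{-\log^{1/8}x}\big)=o(x^2)$.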
If $M(x)> cx$ infinitely often then, as $M(x+y)\ge M(x)-y$, we get $|f(x+cx/2)-f(x) | \ge \sum_{n=0}^{cx/2} (cx-n) \ge \frac{c^2x^2}{8}$ infinitely often, contradicting $f(x)=o(x^2)$; the case $M(x)<-cx$ infinitely often is handled symmetrically. Thus we have proved $M(x)=o(x)$, which is the PNT.
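In integral form the displacement inequality reads as follows: if $M(x)>cx$, then $M(x+u)\ge M(x)-u>cx-u$ for $0\le u\le cx/2$, so $$f(x+cx/2)-f(x)=\int_x^{x+cx/2}M(y)\,dy\ \ge\ \int_0^{cx/2}(cx-u)\,du\ =\ \frac{3c^2x^2}{8}\ \ge\ \frac{c^2x^2}{8},$$ which is exactly what the sum above estimates.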