Why do we need to prove a fraction can always be written in lowest terms?
In the proof of the irrationality of $\sqrt{2}$, which the section you're referencing uses as an example, the assumption is necessary:
Suppose $a$ and $b$ are integers, with $b\neq 0$, such that $\frac{a}{b}=\sqrt{2}$.
Then $a^2=2b^2$.
Then $2$ divides $a^2$, and since $2$ is prime, $2$ divides $a$; say $a=2c$.
Going back to $a^2=2b^2$, we now have $4c^2=2b^2$, and then $2c^2=b^2$. Since $2$ divides $b^2$, $2$ divides $b$.
Up to this point, there is no problem.
However, if you had additionally assumed that $a$ and $b$ have no common prime divisors (i.e. that $\frac{a}{b}$ is in lowest terms), you would now have a contradiction, since $2$ divides both.
Without that assumption, this argument does not get anywhere on its own:
say $b=2d$. Then $2c^2=4d^2\implies c^2=2d^2\implies 2|c$
say $c=2e$. Then $4e^2=2d^2\implies 2e^2=d^2\implies 2|d$
say $d=2f$. Then $2e^2=4f^2\implies e^2=2f^2\implies 2|e$
...
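In fact the descent can be pushed to a contradiction, but only by invoking an extra property of the integers (a sketch, anticipating the point made below): by induction, the steps above show that
$$2^k \mid a \quad\text{and}\quad 2^k \mid b \qquad \text{for every } k\ge 0,$$
and since $a\neq 0$ (because $\frac{a}{b}=\sqrt{2}\neq 0$), this is impossible once $2^k>|a|$. Without some such appeal, the cancellation simply loops forever.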
It does require rigorous proof that fractions can be written in lowest terms (i.e. with coprime numerator and denominator), because this property is not true for all types of numbers. Though the proof is straightforward in the classical integer case, the analogous statement can fail for fractions formed from other numbers. The classical proof depends crucially on the fact that $\,\Bbb N\,$ is well-ordered, so continually cancelling common factors must eventually terminate with a fraction in lowest terms (else the cancellations would yield an infinite decreasing sequence of denominators, contradicting the fact that $\,\Bbb N\,$ is well-ordered).
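One way to phrase that argument precisely (a sketch of the standard version): given a fraction $a/b$ with $b\in\Bbb N$, the set of its possible denominators
$$S=\{\,b'\in\Bbb N \;:\; a/b=a'/b' \text{ for some integer } a'\,\}$$
is nonempty, so by well-ordering it has a least element $b_0$, say $a/b=a_0/b_0$. If $a_0$ and $b_0$ had a common prime divisor $p$, then $a/b=(a_0/p)\big/(b_0/p)$ would have the smaller denominator $b_0/p<b_0$, contradicting minimality. Hence $a_0/b_0$ is in lowest terms.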
For other types of numbers there may exist $\,a,b,c,\,$ with $\,c\,$ a nonunit, such that $\,c^k\,$ divides both $\,a\,$ and $\,b\,$ for all $\,k\ge 0.\,$ Here the above proof breaks down for $\,a/b,\,$ since no matter how many times we cancel $\,c\,$ from the fraction, there always remains a common factor of $\,c.$
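A concrete illustration (a standard example, added here for context): in the ring $R=\Bbb Z+x\,\Bbb Q[x]$ of polynomials with rational coefficients and integer constant term, every power of $2$ divides $x$, since
$$x \;=\; 2^k\cdot\frac{x}{2^k}, \qquad \frac{x}{2^k}\in x\,\Bbb Q[x]\subset R \quad\text{for all } k\ge 0.$$
Taking $a=b=x$ and $c=2$, cancelling $2$ from $x/x$ yields $(x/2)\big/(x/2)$, then $(x/4)\big/(x/4)$, and so on: the common factor $2$ never disappears, so the cancellation never terminates.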
In areas like elementary number theory, where we have very strong intuition based on experience, it is especially important to be careful not to confuse empirical inference with logical inference. This has happened many times in the past.
For example, for many centuries no one noticed that uniqueness of prime factorizations required proof. Apparently either no one conceived of the possibility of nonuniqueness, or those who did thought the proof so "obvious" that it did not deserve mention. This was not corrected until $1801$, when Gauss filled this gaping logical gap in his Disquisitiones Arithmeticae, writing: "It is clear from elementary considerations that any composite number can be resolved into prime factors, but it is often wrongly taken for granted that this cannot be done in several different ways".
But even decades later one still finds mistakes around unique factorization, even by leading number theorists. For example, circa $1850$ a few eminent mathematicians mistakenly thought they had proved FLT, by erroneously assuming statements (e.g. Bezout-style arguments) that they did not realize were equivalent to unique factorization (which generally fails in the rings of cyclotomic integers they studied). Even a century later rigor was still lacking in some expositions; e.g. Harold Davenport noted that some British schoolbooks deemed uniqueness of prime factorization a "law of thought".
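For a concrete instance of such failure (the textbook example, in a quadratic rather than a cyclotomic ring): in $\Bbb Z[\sqrt{-5}]$ we have
$$6 \;=\; 2\cdot 3 \;=\; (1+\sqrt{-5})(1-\sqrt{-5}),$$
where all four factors are irreducible and no two are associates, so factorization into irreducibles is not unique; Bezout-style arguments break down in exactly such rings.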