Convergence of infinite products flaw in this derivation of Euler product formula?
Your intuition is right that there is a flaw in step 3. You propose that any infinite product whose terms tend to $1$ must converge, but this is false. The easiest counterexample is the telescoping product $$ \prod_{j=1}^J \bigg( \frac{j+1}j \bigg) = J+1, $$ which shows that $\prod_{j=1}^\infty \frac{j+1}j$ diverges to $\infty$. More generally, the convergence of the product $\prod_j (1+x_j)$ is by definition equivalent to the convergence of its logarithm $\sum_j \log(1+x_j)$, and in many circumstances this is further equivalent to the convergence of the logarithm's linear approximation $\sum_j x_j$; the example above is the case $x_j = \frac1j$. This is where we need the assumption $s>1$.
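A quick numerical check of the telescoping example (my own sketch; the helper name is just for illustration):

```python
# Partial products of (j+1)/j: each factor tends to 1, yet the product
# telescopes to exactly J + 1 and so grows without bound.
def partial_product(J):
    prod = 1.0
    for j in range(1, J + 1):
        prod *= (j + 1) / j
    return prod

for J in (10, 100, 1000):
    print(J, partial_product(J))  # prints J followed by J + 1 (up to rounding)
```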
There is also a flaw (a very common one, in my experience) in step 4. You are correct that each of the terms on the right-hand side appears when we expand the left-hand side. However, there are literally uncountably many other terms as well. What about, for example, the term in the expansion where we select $\frac1{p_i}$ from every factor? What happens to a term like $\frac1{p_1p_2\cdots}$? And what about all the others where infinitely many of the factors don't equal $1$? In my opinion, it's not easy to repair this flaw without starting over and giving a more rigorous real-analysis proof.
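To see the contrast with the finite case, here is a small numerical sketch of my own: expanding a product over finitely many primes produces exactly one term $1/n^s$ for each integer $n$ built from those primes, and nothing else. Taking $s = 2$ and the primes $2, 3, 5$:

```python
from itertools import product as cartesian

s = 2
K = 40  # truncate each geometric series after the p^(-s*K) term

# Left side: the finite product of truncated geometric series.
lhs = 1.0
for p in (2, 3, 5):
    lhs *= sum(p ** (-s * k) for k in range(K + 1))

# Right side: one term 1/n^s for each n = 2^a * 3^b * 5^c with exponents <= K.
rhs = sum(
    (2 ** a * 3 ** b * 5 ** c) ** (-s)
    for a, b, c in cartesian(range(K + 1), repeat=3)
)

print(lhs, rhs)  # the two sides agree
```

The problematic terms in the infinite case are exactly the ones this finite bookkeeping never produces: selections that pick a nontrivial term from infinitely many factors at once.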
Step 3 is where a "big leap" first occurs in the logic. Putting the variable $s$ back into the exponent, it is an interesting issue that the geometric series expansion $$ \frac{1}{1-1/p^s} = 1 + \frac{1}{p^s} + \frac{1}{p^{2s}} + \frac{1}{p^{3s}} + \cdots = \sum_{k \geq 0} \frac{1}{p^{ks}} $$ is completely valid when $s > 0$ (or ${\rm Re}(s) > 0$ if you want to use complex variables) and also when multiplying together finitely many of these equations, but multiplying such equations together over all $p$ is invalid when $0 < s \leq 1$.
It's hard to pin down an intuitive reason why something goes wrong in the passage from the finite to the infinite when $0 < s \leq 1$ but not when $s > 1$. You can't say "well, after some definite finite number of steps there's a problem"; the issue is entirely about what's going on with the full infinite product. It's sort of like the number $\pi = 3.1415926535\ldots$ being irrational: it is the limit of the truncated decimals $3$, $3.1$, $3.14$, and so on, and at each finite stage the number is rational, but the limit is not. There is no definite point at which the terms stop being rational; irrationality appears only at the limit, not at any earlier step. Or consider how an infinite series of continuous functions can be discontinuous (Fourier series). Limits are subtle.
The only mathematically acceptable way to "explain" why we need $s > 1$ is to bring in a serious theorem that justifies when infinite products can be expanded and rearranged arbitrarily to get a legal infinite series. Here is such a theorem.
Theorem. Let $\{z_n\}$ be real or complex numbers such that (i) $|z_n| \leq 1 - \varepsilon$ for some positive $\varepsilon$ (which is independent of $n$) and (ii) $\sum |z_n|$ converges.
a$)$ The infinite product $\prod_{n \geq 1} \frac{1}{1-z_n} := \lim_{N \rightarrow \infty} \prod_{n=1}^{N} \frac{1}{1-z_n}$ converges to a nonzero number.
b$)$ Every rearrangement of factors of the product converges to the same value.
c$)$ The product $\prod_{n \geq 1} \frac{1}{1-z_n} = \prod_{n \geq 1} (1 + z_n + z_n^2 + \dots)$ has a series expansion by collecting terms in the expected manner: $$ \prod_{n \geq 1} \frac{1}{1 - z_n} = 1 + \sum_{r \geq 1}\sum_{\substack{k_1,\dots,k_r \geq 1 \\ 1 \leq i_1 < \dots < i_r}} z_{i_1}^{k_1}\cdots z_{i_r}^{k_r}, $$ and this series is absolutely convergent.
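As a sanity check on part c$)$ (a sketch of my own), take $z_n = 1/p_n^2$: collecting terms gives the series $\sum_{n \geq 1} 1/n^2 = \pi^2/6$, and the partial products do approach that value:

```python
from math import pi

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

prod = 1.0
for p in primes_up_to(10 ** 5):
    prod *= 1 / (1 - p ** (-2))

print(prod, pi ** 2 / 6)  # the partial product is already very close to zeta(2)
```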
Applying this with $z_n = 1/p_n^s$, where $p_n$ is the $n$th prime and $s > 0$, what we need in this theorem to justify the expansion of the Euler product into the Dirichlet series for the zeta-function is (i) $1/p_n^s \leq 1-\varepsilon$ for all $n$ and (ii) $\sum_{n \geq 1} 1/p_n^s < \infty$. There is no issue with the first condition regardless of the value of $s$, since $1/p_n^s \leq 1/2^s < 1$ for each $s > 0$: we can use $1-\varepsilon = 1/2^s$. But for the second condition we need $\sum_{n \geq 1} 1/p_n^s < \infty$. That is true for $s > 1$ but not for $0 < s \leq 1$, and this is how the theorem uses $s > 1$. But that is not intuitive and I do not think such convergence criteria from analysis are intuitive for inexperienced students. It is the actual nuts and bolts of the proof of the theorem that shows how the hypotheses lead to the conclusion here.
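A numerical illustration of condition (ii), again a sketch of mine: the partial sums of $\sum 1/p$ keep creeping upward (they grow like $\log \log x$), while the partial sums of $\sum 1/p^2$ settle down almost immediately.

```python
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

for bound in (10 ** 2, 10 ** 4, 10 ** 6):
    ps = primes_up_to(bound)
    s1 = sum(1 / p for p in ps)       # s = 1: still growing at every bound
    s2 = sum(1 / p ** 2 for p in ps)  # s = 2: nearly constant already
    print(bound, s1, s2)
```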
My advice is to point out there is a subtlety at $s = 1$, which makes it plausible that for $s < 1$ things are also bad (because $1/p^s > 1/p$ when $0 < s < 1$), and then just say mathematicians can justify that the result of the informal handwavy calculations is okay for $s > 1$. Make some reference to the integral test if the students have seen that already: $\int_1^\infty dx/x^s$ is convergent for $s > 1$ but not for $0 < s \leq 1$, even though $\int_1^b dx/x^s$ is meaningful for all $s > 0$ when $b$ is finite.
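If the students know antiderivatives, the truncated integrals can even be tabulated directly (a sketch under my own naming):

```python
from math import log

def truncated_integral(s, b):
    """Closed form of the integral of x^(-s) from 1 to b."""
    if s == 1:
        return log(b)
    return (b ** (1 - s) - 1) / (1 - s)

for b in (10.0, 10.0 ** 3, 10.0 ** 6):
    # s = 1: grows like log(b) without bound; s = 2: equals 1 - 1/b, so -> 1
    print(b, truncated_integral(1, b), truncated_integral(2, b))
```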
You are correct that step 3 is a problem. (This is not to say that steps >3 are not also problems.)
You suggested that if the factors tend to $1$ then the product will converge, but that is not valid. For instance, $$ \prod_{n=1}^\infty e^{1/n}=e^{\sum_{n=1}^\infty 1/n}=e^\infty=\infty, $$ even though $e^{1/n} \to 1$.
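Numerically (my own sketch): the logarithm of the partial product is exactly the harmonic sum, which is still growing at $n = 10^5$.

```python
from math import exp, log

prod = 1.0
for n in range(1, 100001):
    prod *= exp(1 / n)

# log(prod) equals the harmonic sum 1 + 1/2 + ... + 1/100000, which grows
# like log(100000) + gamma (about 12.09 here) and keeps increasing with n.
print(prod, log(prod))
```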