If the product of two non-zero square matrices is zero, then both factors must be singular.
As Thomas points out, your proof is fine, but if you want another way to look at it, consider the following:
Suppose $AB = 0$. What is the $j$-th column on either side of this equation? On the left, it is a linear combination of the columns $\{\mathbf a_j\}$ of $A$, with coefficients from the $j$-th column of $B$, and on the right is the 0 vector:
$$b_{1j}\mathbf a_1 + b_{2j} \mathbf a_2 + \cdots + b_{nj}\mathbf a_n = \mathbf 0$$
This holds for each $j$, and since $B\neq 0$, at least one coefficient $b_{ij}$ is non-zero, so at least one of these relations is a non-trivial linear dependence among the columns of $A$.
Similarly, we can ask what the rows are on each side of the equation. The $i$-th row on the left is a linear combination of the rows of $B$, with coefficients from the $i$-th row of $A$. So you see that the rows of $B$ must be linearly dependent as well.
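As a concrete sanity check of the column argument above (with a hypothetical pair of $2\times 2$ matrices, not taken from the question), here is a small pure-Python sketch: $A$ and $B$ are non-zero, $AB = 0$, and column $j=1$ of $B$ gives the non-trivial relation $1\cdot\mathbf a_1 + (-1)\cdot\mathbf a_2 = \mathbf 0$.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 1],
     [1, 1]]      # columns a_1 = a_2 = (1, 1): already dependent
B = [[1, -1],
     [-1, 1]]     # column j = 1 of B gives 1*a_1 + (-1)*a_2 = 0

print(matmul(A, B))        # [[0, 0], [0, 0]]
print(det2(A), det2(B))    # 0 0: both factors are singular
```

Both determinants vanish, confirming that each factor is singular in this instance.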
The argument that you gave does indeed work and proves the claim.
If you are not "philosophically" satisfied with it, you may look at it in this way. Recall that matrices can be identified with linear transformations (once bases are fixed) and that the product of matrices corresponds to composition of transformations. Also, non-singular matrices correspond to automorphisms.
Now you can translate the claim into the following. Suppose that you have linear transformations $f$ and $g$ such that the composition $$ V\stackrel{f}{\longrightarrow}V\stackrel{g}{\longrightarrow}V $$ is the $0$-map. Suppose that, say, $g$ is an automorphism. Then, if $f$ is not the $0$-map, the subspace $W={\rm im}(f)$ is not trivial. But now, since $g$ is an automorphism, we must have $g(W)=g\circ f(V)\neq\{0\}$ contradicting the assumption.
A symmetric argument deals with the case when $f$ is an automorphism. Thus we conclude that neither $f$ nor $g$ can be an automorphism.
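The contrapositive in the argument above can be checked numerically. In this sketch (the matrices are hypothetical, chosen only for illustration), $g$ is represented by an invertible $2\times 2$ matrix and $f$ by a non-zero one; since $g$ maps the non-trivial subspace ${\rm im}(f)$ to a non-trivial subspace, $g\circ f$ cannot be the $0$-map:

```python
def matvec(M, v):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

G = [[2, 1],
     [1, 1]]      # det = 1, so g is an automorphism
F = [[1, 0],
     [0, 0]]      # f != 0: im(f) is spanned by (1, 0)

v = [1, 0]               # a vector with f(v) != 0
fv = matvec(F, v)        # (1, 0), a non-zero vector in im(f)
gfv = matvec(G, fv)      # g(f(v)) = (2, 1) != 0, so g ∘ f is not the 0-map
print(fv, gfv)
```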
Perhaps it would be helpful to think in terms of the associated linear transformations $T_A$, $T_B$, and $T_{AB}$. You perhaps know that $T_{AB}=T_A\circ T_B$. If $AB$ is zero, then $T_{AB}$ is the zero transformation that sends every vector to $\vec 0$. That means that $T_A\circ T_B$ must do the same thing. This means that $T_A$ has to ‘kill off’ every non-zero vector in the range of $T_B$, if there are any.
Since $T_B$ isn’t the zero transformation, there must be at least one vector that it doesn’t kill off, so $T_A$ has to send at least one non-zero vector to $\vec 0$. But this means that it can’t be invertible: it ‘collapses’ two vectors together, and there’s no way to ‘un-collapse’ them.
On the other hand, $T_A$ isn’t the zero transformation either, so there’s at least one vector $v$ such that $T_A(v)\ne\vec 0$. If $T_B$ were invertible, that $v$ would have to be in the range of $T_B$, i.e., there would have to be some $u$ such that $v=T_B(u)$. But then $T_A\circ T_B$ wouldn’t kill off $u$, so it wouldn’t be the zero transformation after all. Thus, $T_B$ can’t be invertible either.
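The kernel/range reasoning above can be illustrated with concrete $2\times 2$ matrices (a hypothetical example, not from the question): since $AB=0$, every column of $B$ lies in the null space of $T_A$, so $T_A$ ‘collapses’ a non-zero vector to $\vec 0$ and cannot be invertible.

```python
def matvec(M, v):
    """Apply a matrix (list of rows) to a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1, 1],
     [1, 1]]
B = [[1, -1],
     [-1, 1]]     # AB = 0 for this pair

u = [1, 0]
v = matvec(B, u)          # v = (1, -1): a non-zero vector in the range of T_B
print(v, matvec(A, v))    # T_A kills v: A v = (0, 0)
```

So $T_A$ sends the non-zero vector $v$ to $\vec 0$ and is therefore not invertible, exactly as the argument predicts.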