Find all matrices $A\in \mathbb{R}^{2\times2}$ such that $A^2=\mathbf{0}$
Is this a correct proof?
It's good, but some might quibble with the third case: your summary requires "and $A$ is not invertible", which doesn't explicitly translate into conditions on $a,b,c$.
What is the standard proof?
I'm not sure I can say "standard", but what first comes to mind for me uses eigenvalues and Jordan canonical forms. Since $A^2=0$, any eigenvalue $\lambda$ of $A$ satisfies $\lambda^2=0$, so $A$'s only eigenvalue is $0$. Then either:
$A\sim\begin{bmatrix}0&0\\0&0\end{bmatrix}$. But since the zero matrix is unchanged by conjugation, that immediately means $A=\begin{bmatrix}0&0\\0&0\end{bmatrix}$.
$A\sim\begin{bmatrix}0&1\\0&0\end{bmatrix}$. This means for some invertible $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ you have: $$A=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}0&1\\0&0\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}^{-1}=\frac{1}{ad-bc}\begin{bmatrix}-ac&a^2\\-c^2&ac\end{bmatrix}$$ Simplifying the presentation, there is some nonzero $k$ such that $$A=k\begin{bmatrix}-ac&a^2\\-c^2&ac\end{bmatrix}$$ (And allowing $k=0$ actually covers the first case.)
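If you want to sanity-check that conjugation formula, here is a minimal sympy sketch (my own illustration, not part of the answer) that builds $PJP^{-1}$ symbolically and confirms both the formula and the nilpotency:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
P = sp.Matrix([[a, b], [c, d]])    # generic change of basis, assumed invertible (ad - bc != 0)
J = sp.Matrix([[0, 1], [0, 0]])    # nilpotent Jordan block

A = P * J * P.inv()
expected = sp.Matrix([[-a*c, a**2], [-c**2, a*c]]) / (a*d - b*c)

print(sp.simplify(A - expected))   # zero matrix: the formula checks out
print(sp.simplify(A * A))          # zero matrix: A^2 = 0, as it must be
```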
So now we know $A=k\begin{bmatrix}-ac&a^2\\-c^2&ac\end{bmatrix}$ for some real numbers $a,c,k$. Let's view the Cartesian point $(a,c)$ in polar coordinates $(r,t)$, so that $a=r\cos(t)$ and $c=r\sin(t)$. Then we know that $A=k\begin{bmatrix}-r^2\cos(t)\sin(t)&r^2\cos^2(t)\\-r^2\sin^2(t)&r^2\cos(t)\sin(t)\end{bmatrix}$. We can absorb the $r^2$ into $k$, apply the double-angle identities, and write $$ \begin{align} A&=k\begin{bmatrix}-\cos(t)\sin(t)&\cos^2(t)\\-\sin^2(t)&\cos(t)\sin(t)\end{bmatrix}\\ &=\frac{k}2\begin{bmatrix}-\sin(2t)&1+\cos(2t)\\\cos(2t)-1&\sin(2t)\end{bmatrix}\\ &=q\begin{bmatrix}-\sin(s)&1+\cos(s)\\\cos(s)-1&\sin(s)\end{bmatrix} \end{align}$$ where $q=k/2$ and $s=2t$.
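The double-angle step is easy to fumble a sign on, so here is a quick sympy check (again my own addition, not part of the answer) that the two forms agree:

```python
import sympy as sp

k, t = sp.symbols('k t', real=True)
before = k * sp.Matrix([[-sp.cos(t)*sp.sin(t), sp.cos(t)**2],
                        [-sp.sin(t)**2, sp.cos(t)*sp.sin(t)]])
after = (k / 2) * sp.Matrix([[-sp.sin(2*t), 1 + sp.cos(2*t)],
                             [sp.cos(2*t) - 1, sp.sin(2*t)]])

print(sp.simplify(after - before))  # zero matrix: the rewriting is correct
```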
Conversely, if $A$ is of this form then it is easy to directly show that $A^2=0$.
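For instance, a short symbolic computation (sympy once more, as an illustration rather than part of the answer) verifies the converse for all $q$ and $s$ at once:

```python
import sympy as sp

q, s = sp.symbols('q s', real=True)
A = q * sp.Matrix([[-sp.sin(s), 1 + sp.cos(s)],
                   [sp.cos(s) - 1, sp.sin(s)]])

print(sp.simplify(A * A))  # [[0, 0], [0, 0]]: A^2 = 0 for every q and s
```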
This form shows that the "shape" of the collection of matrices where $A^2=0$ is a cone. Choose some $q$ in $\mathbb{R}$ and some $s$ in $S^1$; then you have your $A$. The correspondence is one-to-one, except that when $q=0$ the value of $s$ is irrelevant.
How can I know that those 8 cases were the only ones?
Having 8 cases was particular to your approach, so I'm not sure how to answer the question: my approach has two cases, and someone else's might have 16. But your approach had 8 cases, and under that approach there aren't any more, because you exhausted the logical options you focused on.
Your proof is correct, although rather naive, in the sense that you've converted the matrix equation into a system of simultaneous equations and solved it. There's no "standard" proof per se, but there are many more conceptual ways to prove it.
One example is the eigenvalue argument provided by Bungo in the comments. Another way is to note that if $A^2=0$, then the minimal polynomial of $A$ must be $x$ or $x^2$ (the former being the trivial case $A=0$). Since the minimal polynomial of a matrix divides its characteristic polynomial, and the characteristic polynomial of a $2\times2$ matrix is monic of degree $2$, the characteristic polynomial of $A$ must be exactly $x^2$; equivalently, $\operatorname{tr}(A)=0$ and $\det(A)=0$. It is easy to deduce all the possible forms of $A$ from here on.
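Concretely, by Cayley-Hamilton a $2\times2$ matrix satisfies $A^2-\operatorname{tr}(A)A+\det(A)I=0$, so $\operatorname{tr}(A)=0$ and $\det(A)=0$ together force $A^2=0$. Here is a small sympy sketch (mine, not from the answer) of that deduction:

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[a, b], [c, -a]])   # the general trace-zero 2x2 matrix

print(A.det())                     # -a**2 - b*c, so det(A) = 0 means b*c = -a**2

# Cayley-Hamilton with tr(A) = 0 gives A^2 = -det(A)*I, so imposing
# det(A) = 0 (i.e. b*c = -a**2) makes A^2 vanish:
print(sp.simplify((A * A).subs(b*c, -a**2)))   # zero matrix
```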
The advantage of these more "high-level" arguments is that they generalise easily, e.g. to higher-dimensional vector spaces.