How to deal with zero uncertainties?
Use the second derivative (or third, or whatever). The reason we use that formula is that
$$ df \approx \frac{df}{dx} dx $$
is the first-order Taylor approximation to $df$. If the first-order term vanishes, you should include higher-order terms:
$$ df \approx \frac{df}{dx}\,dx+\frac{1}{2}\frac{d^2f}{dx^2}\,dx^2+\cdots $$
In your case, with $f=x^2$ and $x=0$, the first derivative $df/dx = 2x$ vanishes at $x = 0$ while $d^2f/dx^2 = 2$, so
$$ df \approx (dx)^2 $$
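A minimal sketch of this second-order propagation for $f = x^2$ (the function and values here are just the ones from this example, not a general-purpose routine):

```python
# Second-order Taylor propagation for f(x) = x^2.
# f'(x) = 2x, f''(x) = 2; at x = 0 the first-order term vanishes,
# so the leading contribution to df is (1/2) f''(0) (dx)^2 = (dx)^2.

def propagate_second_order(x, dx):
    """Estimate df for f(x) = x^2 keeping first- and second-order terms."""
    first_order = 2 * x * dx           # f'(x) dx
    second_order = 0.5 * 2 * dx ** 2   # (1/2) f''(x) (dx)^2
    return first_order + second_order

# At x = 0 with dx = 0.1, only the second-order term survives:
print(propagate_second_order(0.0, 0.1))  # 0.01 = (0.1)^2
```

Away from $x = 0$ the first-order term dominates again and this reduces to the usual linear formula.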
This is a situation where naive error propagation breaks down. Those methods (i.e. quoting an uncertainty for $f(\mathbf{x})$ given measured values $\mathbf{x} \pm \Delta \mathbf{x}$) are based on a linear approximation, which fails for $f(x) = x^2$ near $x = 0$.
If you're not too worried about statistics issues, you can use the 'min-max' technique: your error bars on $f$ will be the minimum and maximum values you can get using values in the range $[x-\Delta x, x + \Delta x]$. In your situation with $x = 0$, this would be $f \in [0, (\Delta x)^2]$. This is nice because if you're (say) 95% confident your true $x$ is in $[x - \Delta x, x + \Delta x]$, then you're at least 95% confident that the true $f$ is captured too.
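The min-max idea can be sketched by simply scanning $f$ over the interval (the grid size here is an arbitrary choice for illustration):

```python
import numpy as np

def minmax_interval(f, x, dx, n=1001):
    """'Min-max' error bars: evaluate f on a grid over [x - dx, x + dx]
    and take the extreme values as the bounds on f."""
    grid = np.linspace(x - dx, x + dx, n)
    values = f(grid)
    return values.min(), values.max()

# For f(x) = x^2 at x = 0 with dx = 0.1:
lo, hi = minmax_interval(lambda t: t ** 2, x=0.0, dx=0.1)
# approximately [0, 0.01], i.e. f in [0, (dx)^2]
```

Note that a grid scan only finds interior extrema to within the grid spacing; for a monotone or simple function like $x^2$ this is not an issue.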
On a more rigorous level, the problem is that, in most elementary physics experiments, all errors are assumed to be Gaussian. (Error propagation using linear approximations preserves this property.) But when you do something nonlinear like this, the resulting error distribution in $f$ isn't even close to Gaussian. There are several sensible things to do, and you should ask your professor which is appropriate.
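You can see the non-Gaussianity directly with a quick Monte Carlo sketch (the choice of $\sigma = \Delta x = 0.1$ and the sample size are arbitrary here): if $x \sim \mathcal{N}(0, \sigma^2)$, then $f = x^2$ follows a scaled chi-squared distribution with one degree of freedom, which is strongly skewed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw x from a Gaussian centered on the measured value x = 0
# with sigma = dx, and push each sample through f(x) = x^2.
dx = 0.1
x_samples = rng.normal(loc=0.0, scale=dx, size=100_000)
f_samples = x_samples ** 2

# The result is (dx^2 times) a chi-squared distribution with 1 degree
# of freedom: all mass is at f >= 0 and the mean sits well above the
# median -- nothing like a Gaussian.
print(np.mean(f_samples))    # ~ dx^2 = 0.01
print(np.median(f_samples))  # ~ 0.45 * dx^2, far below the mean
```

Summarizing such a skewed distribution with a single symmetric error bar is exactly what breaks down, which is why asking your professor which convention to use is good advice.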