What is the difference between Uncertainty and Standard Deviation?
"Do physicists just use the word standard deviation to refer to uncertainty?" Often we assume that results of our measurements are normal distributed (we can argue that, if we don't know the reason for the deviation from the "real" value, then it is most likely due to many factors and if you have many arbitrarily distributed factors influencing a variable, then that variable follows the normal distribution - central limit theorem). Then we can use some measure of the width of the normal distribution as our uncertainty, e.g. the std-deviation. But of course you are basically free in choosing what you use, one sigma might be ok now, but often multiples of sigma are used. You might also know that whatever you are measuring is in fact not normal distributed, then you would have to choose some other measure of uncertainty. So when it comes to uncertainties there is no one-size-fits-all solution. However, Gaussian error propagation based on standard deviations is the go-to if there are no reasons against it and in that case uncertainty and some multiple of sigma would be the same thing.
Now to the question of what values to put in for the sigmas. Let me mention that $\sqrt{\frac{1}{n-1}\sum_i\left(x_i - \bar{x}\right)^2}$ is not the standard deviation but an estimator of the "real" standard deviation of the distribution, which itself has an uncertainty (if it were the real value of the standard deviation, the formula would give the same result for every sample). So "why don't we plug in the standard deviations of the distributions"? Because you might have a better guess for the standard deviation than the estimator above.
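A minimal sketch of this point (the true sigma and the sample size are arbitrary assumptions): applying the estimator to several samples drawn from the same distribution gives different values each time, so the estimator itself carries an uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)
true_sigma = 2.0   # "real" standard deviation of the distribution (assumed)
n = 10             # small sample, as in a typical measurement series

# Draw several independent samples and apply the estimator to each one.
estimates = [rng.normal(0.0, true_sigma, n).std(ddof=1) for _ in range(5)]
print(estimates)   # the values scatter around 2.0: the estimator is itself uncertain
```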
"Wouldn't this mean that you could manipulated the standard deviation σ just by what values you choose for your uncertainties." Yes, you can. Usually you would have to describe in detail why you chose some measure of uncertainty and others might be critical of your choice and contest your results because of that.
The key difference between these equations is the nature of the error: while the first is used for systematic errors, the second is used for random errors.
The first equation is the total derivative of a function $f=f(x,y)$ at the point $(x_0, y_0)$: $$ \tag1 df = df(x_0,y_0) = \frac{\partial f(x_0,y_0)}{\partial x} dx +\frac{\partial f(x_0,y_0)}{\partial y} dy $$ This holds for any function and any variables. Since systematic errors are unknown constants, their variance is zero. However, eq. (1) tells us how a "systematic offset" $dx$ generates a "systematic offset" $df$: the systematic error $dx$ is weighted by the derivative $\frac{\partial f(x_0,y_0)}{\partial x}$, because the severity of the error depends on how quickly the function $f$ changes around the point $(x_0,y_0)$. That is why we use eq. (1) to estimate the systematic error.
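As an illustration of eq. (1), here is a short sketch with a hypothetical $f(x,y) = x\,y$ and assumed systematic offsets $dx$, $dy$; all numbers are made up for the example.

```python
# Hypothetical example: f(x, y) = x * y with small systematic offsets dx, dy.
x0, y0 = 3.0, 4.0
dx, dy = 0.01, -0.02   # assumed systematic offsets

# Partial derivatives of f = x * y at (x0, y0).
df_dx = y0
df_dy = x0

# Eq. (1): the offsets add linearly, weighted by the partial derivatives.
df_linear = df_dx * dx + df_dy * dy
df_exact = (x0 + dx) * (y0 + dy) - x0 * y0
print(df_linear, df_exact)   # nearly equal, since the offsets are small
```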
In contrast, your second equation tells us how the random variables $x$ and $y$ influence the response variable $f(x,y)$. By squaring both sides we get $$ \tag2 Var[f(x_0,y_0)] \approx \left(\frac{\partial f(x_0,y_0)}{\partial x} \right)^2 Var[x] + \left(\frac{\partial f(x_0,y_0)}{\partial y} \right)^2 Var[y] $$ where I use $\sigma_x^2 = Var[x]$. The variance of $x$ is non-zero because, if we try to set the input to $x_i=x_0$, we actually get $x_i=x_0 + \epsilon_i$, where $\epsilon_i$ is a random error. I hope these statements make it clear that $dx \ne \sigma_x$: although both are "uncertainties", systematic and random errors are fundamentally different. Side remark: the confusion regarding the words uncertainty and standard deviation is understandable, because people often use them as synonyms. However, historically other "conventions" exist. Thus, I strongly recommend that you do not use the word "uncertainty" unless you have either defined it beforehand or use it only in a qualitative (non-quantitative) fashion.
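To make eq. (2) concrete, here is a minimal sketch using the same hypothetical $f(x,y) = x\,y$ as above, with assumed values for $\sigma_x$ and $\sigma_y$; it compares the propagated variance with a simple Monte Carlo estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical f(x, y) = x * y; sigma_x, sigma_y are assumed random errors.
x0, y0 = 3.0, 4.0
sigma_x, sigma_y = 0.1, 0.2
n = 1_000_000

# Eq. (2): variances add, weighted by the squared partial derivatives.
var_propagated = (y0 * sigma_x) ** 2 + (x0 * sigma_y) ** 2

# Monte Carlo check: sample the random errors and look at the spread of f.
x = rng.normal(x0, sigma_x, n)
y = rng.normal(y0, sigma_y, n)
var_sampled = np.var(x * y, ddof=1)

print(var_propagated, var_sampled)   # agree to a good approximation
```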
How do we estimate the variance $Var[f(x,y)]$ in eq. (2)? Let's consider a simple example, where we have only a single random input variable $x$ (no second input $y$). We then have several options:
- We set $x_i = x_i^{(target)}$ and remeasure the response $f(x_i)$ without changing the target value, $x_i^{(target)} = x_0 = const$. We know that the input variable fluctuates according to $x_i = x_0 + \epsilon_i$. Hence, by measuring the response variable several times we obtain the estimate $Var[f(x_0)] \approx \frac{1}{n-1}\sum_{i=1}^n (f_i - \bar f)^2$. Although we have no way of determining $Var[x_i]$, we obtain an estimate of $Var[f(x_0)]$ without using error propagation (see the sketch after this list). Note that the systematic error is not included in $Var[f(x)]$.
- We set $x_i=x_i^{(target)}$ and change the target values $x_i^{(target)}$. The so-called residuals $r(x_i)=f(x_i) - f(\bar x)$ are the random errors $\epsilon_f$. Thus, $Var[f(x_i)] = Var[r(x_i)]$ provides an estimate of the variance of the response variable.
- We can check the manual of our measurement equipment and use its stated precision as an estimate of $Var[f(x_i)]$. There are fancier ways to obtain a more accurate estimate (assuming a probability distribution from which the random error is sampled), but this goes beyond your question.
- We can guess a random error $\sigma_x$ and use the error propagation formula, eq. (2), to check how the result is influenced. This is certainly the least objective method.
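For the first option, here is a minimal sketch of estimating $Var[f(x_0)]$ from repeated measurements at a fixed target value; the response function, the fluctuation $\sigma_x$, and the sample size are assumptions made only to generate example data.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return x ** 2          # hypothetical response function

x0 = 3.0                   # fixed target value
sigma_x = 0.05             # unknown in practice; assumed here to generate the data
n = 20                     # number of repeated measurements

# Repeated measurements: the input fluctuates as x_i = x0 + eps_i.
f_measured = f(x0 + rng.normal(0.0, sigma_x, n))

# Sample variance of the response, obtained without any error propagation.
var_f = np.var(f_measured, ddof=1)
print(var_f)
```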