What is the difference between partial and normal derivatives?
Some key things to remember about partial derivatives are:
- You need to have a function of one or more variables.
- You need to be very clear about what that function is.
- You can only take partial derivatives of that function with respect to each of the variables it is a function of.
So for your Example 1, $z = xa + x$, if what you mean by this is to define $z$ as a function of two variables, $$z = f(x, a) = xa + x,$$ then $\frac{\partial z}{\partial x} = a + 1$ and $\frac{dz}{dx} = a + 1 + x\frac{da}{dx},$ as you surmised, though you could also have gotten that last result by considering $a$ as a function of $x$ and applying the Chain Rule.
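If it helps to see the distinction mechanically, here is a small sympy sketch (my own illustration, not part of the question): treating $a$ as an independent symbol reproduces the partial derivative, while declaring $a$ to be an unspecified function of $x$ reproduces the total derivative via the Chain Rule.

```python
# A quick symbolic check of Example 1 (a sketch using sympy; the names are mine).
import sympy as sp

x = sp.Symbol('x')

# Case 1: a is an independent variable -> differentiating in x holds it fixed (partial derivative).
a = sp.Symbol('a')
z = x*a + x
print(sp.diff(z, x))          # a + 1

# Case 2: a is an unspecified function of x -> the total derivative picks up a'(x).
a_of_x = sp.Function('a')(x)
z_total = x*a_of_x + x
print(sp.diff(z_total, x))    # a(x) + x*Derivative(a(x), x) + 1, i.e. a + 1 + x*da/dx
```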
But when we write something like $y = ax^2 + bx + c,$ and we say explicitly that $a$, $b$, and $c$ are (possibly arbitrary) constants, $y$ is really only a function of one variable: $$y = g(x) = ax^2 + bx + c.$$ Sure, you can say that $\frac{\partial y}{\partial x}$ is what happens when you vary $x$ while holding $a$, $b$, and $c$ constant, but that's about as meaningful as saying you vary $x$ while holding the number $3$ constant.
I suppose technically $\frac{\partial y}{\partial x}$ is defined even if $y$ is a single-variable function of $x$, but it would then just be $\frac{dy}{dx}$ (the ordinary derivative), and I can't remember seeing such a thing ever written as a partial derivative. It would not make it possible to do anything you cannot do with the ordinary derivative, and it might confuse people (who might try to guess what other variables $y$ is a function of).
The previous paragraph implies that the answer to your Example 3 is "yes." It also hints at why I almost wrote "a function of two or more variables" as part of the first requirement for using partial derivatives. Technically I think you only need a function of one or more variables, but you should want a function of at least two variables before you think about taking partial derivatives.
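To see the point about constants mechanically, here is a minimal sympy sketch (my own illustration): with $a$, $b$, $c$ declared as fixed symbols, differentiating with respect to $x$ gives the same $2ax + b$ whether you choose to call it a partial or an ordinary derivative.

```python
# Sketch (sympy): with a, b, c as fixed symbols, "partial" and ordinary derivatives coincide.
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
y = a*x**2 + b*x + c
print(sp.diff(y, x))   # 2*a*x + b -- the only variable actually being varied is x
```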
For Example 2, where we have $x^2 + y^2 = 1$, it is not obvious what the function is that we would be taking partial derivatives of. Either $x$ or $y$ could be a function of the other. (The function would be defined only over a limited domain, and would produce only some of the points that satisfy the equation, but it can still be useful to do some analysis under those conditions.) If you write something besides the equation to make it clear that (say) $y$ is a function of $x$, giving a sufficiently clear idea which of the possible functions of $x$ you mean, then I think technically you could write $\frac{\partial y}{\partial x}$, and you might even find that $\frac{\partial y}{\partial x} = -\frac{x}{y}$, but again this is a lot of trouble and confusion to get a result you could get simply by using ordinary derivatives.
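Here is a small sympy sketch of that implicit point of view (my own illustration, assuming we have picked a branch on which $y$ really is a function of $x$): differentiating the equation and solving for $dy/dx$ gives $-x/y$, the same answer ordinary implicit differentiation gives without any partial-derivative notation.

```python
# Sketch (sympy): y treated as a function of x defined implicitly by x**2 + y**2 = 1.
import sympy as sp

x = sp.Symbol('x')
y = sp.Function('y')(x)

eq = sp.Eq(x**2 + y**2, 1)
# Differentiate both sides with respect to x and solve for dy/dx.
dydx = sp.solve(sp.diff(eq.lhs - eq.rhs, x), sp.diff(y, x))[0]
print(dydx)   # -x/y(x), i.e. dy/dx = -x/y
```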
On the other hand, suppose we say that $$h(x,y) = x^2 + y^2 - 1,$$ and we are interested in the points that satisfy $x^2 + y^2 = 1$, that is, where $h(x,y) = 0$. Now we have a function of multiple variables, so we can do interesting things with partial derivatives, such as compute $\frac{\partial h}{\partial x}$ and $\frac{\partial h}{\partial y}$ and perhaps use these to look for trajectories in the $x,y$ plane along which $h$ is constant. OK, we don't really need partial derivatives to figure out that those trajectories will run along circular arcs, but we could have some other two-variable function where the answer is not so obvious.
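As a concrete illustration of that last paragraph (a sympy sketch of my own), the two partial derivatives are $2x$ and $2y$, and a direction $(dx, dy)$ keeps $h$ constant exactly when $2x\,dx + 2y\,dy = 0$.

```python
# Sketch (sympy): partial derivatives of the two-variable function h(x, y) = x**2 + y**2 - 1.
import sympy as sp

x, y = sp.symbols('x y')
h = x**2 + y**2 - 1

h_x = sp.diff(h, x)   # 2*x
h_y = sp.diff(h, y)   # 2*y
print(h_x, h_y)

# A direction (dx, dy) keeps h constant exactly when h_x*dx + h_y*dy = 0,
# i.e. when (dx, dy) is perpendicular to (x, y) -- motion along a circular arc.
```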
I hope this answers your question.
The partial derivative notation is used to specify the derivative of a function of more than one variable with respect to one of its variables.
e.g. let $y$ be a function of three variables such that $y(s, t, r) = r^2 - srt$; then
$$\frac{\partial y}{\partial r} = 2r-st$$
The $\frac{d}{dx}$ notation is used when the function to be differentiated depends on only one variable, e.g. $y(x) = x^2 \implies \frac{dy}{dx} = 2x$.
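If you want to check both examples symbolically, here is a small sympy sketch of my own (the variable names follow the answer):

```python
# Sketch (sympy) checking both examples above.
import sympy as sp

s, t, r, x = sp.symbols('s t r x')

y_multi = r**2 - s*r*t
print(sp.diff(y_multi, r))   # 2*r - s*t  (partial derivative with respect to r)

y_single = x**2
print(sp.diff(y_single, x))  # 2*x  (ordinary derivative dy/dx)
```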
I hope this clarifies it a bit for you.
So really, both denote a rate of change with respect to a single variable; the partial-derivative notation is used within the context of multivariable calculus (the other variables being held fixed), whilst $\frac{d}{dx}$ is reserved for univariate calculus.
The (calculus-of-variations) tag seems not to be the most popular one, so maybe it needs some more advertising (-:
- Intuition behind variational principle
Let there be given a curve $\vec{q}(t)$ and a real-valued function $L$ whose arguments are this curve, the time derivative $\dot{\vec{q}}(t)$ of the curve, and the time $t$ itself.
Minimize the following integral as a functional of the curve $\vec{q}(t)$:
$$ W\left(\vec{q},\dot{\vec{q}}\right) = \int_{t_1}^{t_2} L\left(\vec{q},\dot{\vec{q}},t\right) dt = \mbox{minimum} $$
It is proved in the reference that the curve minimizing the integral $W$ satisfies the following system of differential equations (involving both partial and ordinary derivatives), one for each coordinate $q_k(t)$ of the curve $\vec{q}(t)$:
$$ \frac{\partial L}{\partial q_k} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}_k}\right) = 0 $$
These are the well known Euler-Lagrange equations. Let us specialize them to the following problem: find all curves in the Euclidean plane for which the length $W$ between two given end-points is minimal. This makes $\vec{q} = (x,y)$ and $\dot{\vec{q}} = (\dot{x},\dot{y})$ in:
$$ W = \int_{t_1}^{t_2} L(\dot{x},\dot{y})\, dt = \mbox{minimal} \qquad \mbox{with} \quad L(\dot{x},\dot{y}) = \sqrt{\dot{x}^2 + \dot{y}^2} $$
The Euler-Lagrange equations become:
$$ \frac{\partial L}{\partial x} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{x}}\right) = 0 \\ \frac{\partial L}{\partial y} - \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{y}}\right) = 0 $$
Partial derivatives. Obviously:
$$ \frac{\partial L}{\partial x} = \frac{\partial L}{\partial y} = 0 $$
Somewhat less obviously:
$$ \frac{\partial \sqrt{\dot{x}^2 + \dot{y}^2}}{\partial \dot{x}} = \frac{\dot{x}}{\sqrt{\dot{x}^2 + \dot{y}^2}} \\ \frac{\partial \sqrt{\dot{x}^2 + \dot{y}^2}}{\partial \dot{y}} = \frac{\dot{y}}{\sqrt{\dot{x}^2 + \dot{y}^2}} $$
Ordinary (time) derivatives:
$$ \frac{d}{dt} \frac{\dot{x}}{\sqrt{\dot{x}^2 + \dot{y}^2}} = \frac{ \ddot{x} \sqrt{\dot{x}^2 + \dot{y}^2} - \dot{x} \left( \dot{x} \ddot{x} + \dot{y} \ddot{y} \right) / \sqrt{\dot{x}^2 + \dot{y}^2}} {\left(\sqrt{\dot{x}^2 + \dot{y}^2}\right)^2} = \dot{y}\, \frac{\dot{y}\ddot{x} - \dot{x}\ddot{y}}{\left(\dot{x}^2 + \dot{y}^2\right)^{3/2}} = - \kappa \, \dot{y} \\ \frac{d}{dt} \frac{\dot{y}}{\sqrt{\dot{x}^2 + \dot{y}^2}} = \frac{ \ddot{y} \sqrt{\dot{x}^2 + \dot{y}^2} - \dot{y} \left( \dot{x} \ddot{x} + \dot{y} \ddot{y} \right) / \sqrt{\dot{x}^2 + \dot{y}^2}} {\left(\sqrt{\dot{x}^2 + \dot{y}^2}\right)^2} = \dot{x}\, \frac{\dot{x}\ddot{y} - \dot{y}\ddot{x}}{\left(\dot{x}^2 + \dot{y}^2\right)^{3/2}} = + \kappa \, \dot{x} $$
Here $\kappa = \left(\dot{x}\ddot{y} - \dot{y}\ddot{x}\right)/\left(\dot{x}^2 + \dot{y}^2\right)^{3/2}$ is recognized as the curvature. The Euler-Lagrange equations thus say that $- \kappa\, \dot{x} = +\kappa \, \dot{y} = 0$, which (since $\dot{x}$ and $\dot{y}$ cannot both vanish for a regular curve) can only be fulfilled if $\kappa = 0$: the curvature is zero.
Indeed, the shortest path between two points in the Euclidean plane is a straight line.
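For completeness, here is a symbolic check of the derivation above (a sympy sketch of my own, not part of the original argument): it recovers the two velocity partials and confirms that their time derivatives equal $-\kappa\,\dot{y}$ and $+\kappa\,\dot{x}$.

```python
# Symbolic verification sketch (sympy) of the Euler-Lagrange computation for arc length.
import sympy as sp

t = sp.Symbol('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)
xd, yd = sp.diff(x, t), sp.diff(y, t)

L = sp.sqrt(xd**2 + yd**2)

# Partial derivatives of L with respect to the velocities x-dot and y-dot
# (sympy allows differentiating with respect to a Derivative object).
dL_dxd = sp.diff(L, xd)   # xdot / sqrt(xdot**2 + ydot**2)
dL_dyd = sp.diff(L, yd)   # ydot / sqrt(xdot**2 + ydot**2)

# Time derivatives appearing in the Euler-Lagrange equations.
EL_x = sp.diff(dL_dxd, t)
EL_y = sp.diff(dL_dyd, t)

# Curvature of the parametrized plane curve.
kappa = (xd*sp.diff(y, t, 2) - yd*sp.diff(x, t, 2)) / (xd**2 + yd**2)**sp.Rational(3, 2)

# Both of these should simplify to 0, confirming EL_x = -kappa*ydot and EL_y = +kappa*xdot.
print(sp.simplify(EL_x + kappa*yd))
print(sp.simplify(EL_y - kappa*xd))
```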