Difference between one-variable calculus and multi-variable calculus?
There's essentially no difference in the concept of the derivative. It is still the best linear approximation to your function at each point. I.e. $f$ differentiable at $x_0$ still means that $$ f(x_0+h) = f(x_0) + f'(x_0)h + \varphi(h), $$ for $h$ in some neighborhood of $0$ and $\varphi$ a continuous function with $\varphi(h)/\|h\| \to 0$ as $h \to 0$, i.e. the error vanishes faster than linearly. Except now, for $f : \mathbb{R}^n \to \mathbb{R}^m$, $f'(x_0)$ is an $m \times n$ matrix called the Jacobian matrix, which is much more complicated to work with.
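As a quick numerical sanity check (a sketch, not part of the original answer; the particular $f$ is just an illustrative choice), one can verify that the Jacobian really is the best linear approximation: the error of the linear model, divided by $\|h\|$, shrinks as $h \to 0$.

```python
import numpy as np

# Illustrative map f : R^2 -> R^2 with a hand-computed 2x2 Jacobian.
def f(p):
    x, y = p
    return np.array([x**2 * y, np.sin(x) + y])

def jacobian(p):
    x, y = p
    return np.array([[2 * x * y, x**2],
                     [np.cos(x), 1.0]])

x0 = np.array([1.0, 2.0])
J = jacobian(x0)

# Error of the linear approximation, relative to |h|, along a fixed direction.
ratios = []
for t in [1e-2, 1e-3, 1e-4]:
    h = t * np.array([0.6, -0.8])                    # unit direction scaled by t
    err = np.linalg.norm(f(x0 + h) - f(x0) - J @ h)
    ratios.append(err / np.linalg.norm(h))

print(ratios)  # decreasing toward 0: the error is o(|h|)
```

The ratios shrink roughly linearly in $t$, exactly the "$\varphi(h)/\|h\| \to 0$" condition above.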
The first new thing we get when we go to higher dimensions is the notion of directional derivative, i.e. "how much does $f$ change in the direction of $v$?" We actually already have that on $\mathbb{R}$, it's just that there is only one direction (actually two, but it just comes down to a sign change), so it's never really looked at that way. The notion of directional derivative is exactly what you want when you want to generalize further to smooth manifolds, except you have to be a bit clever since you don't have an ambient space in which to have tangent vectors and instead use derivations (of e.g. smooth functions).
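A small numerical illustration of directional derivatives (again a sketch with a made-up example function): for differentiable $f$, the derivative of $f$ in the unit direction $v$ at $p$ is $\nabla f(p) \cdot v$, and it agrees with the one-variable difference quotient along that direction.

```python
import numpy as np

# Illustrative scalar function on R^2 and its gradient, computed by hand.
def f(p):
    x, y = p
    return x**2 + 3 * y

def grad_f(p):
    x, y = p
    return np.array([2 * x, 3.0])

p = np.array([1.0, 2.0])
v = np.array([3.0, 4.0]) / 5.0          # unit direction

exact = grad_f(p) @ v                    # directional derivative via the gradient
t = 1e-6
approx = (f(p + t * v) - f(p)) / t       # one-variable difference quotient along v

print(exact, approx)
```

On $\mathbb{R}$ this reduces to the ordinary derivative (with $v = \pm 1$ giving the sign change mentioned above).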
Integrals in multiple variables are much more complicated than the usual Riemann integral, even when the functions are continuous. Fubini's theorem illustrates this: reducing a multiple integral to iterated one-variable integrals is a genuine theorem with real hypotheses (e.g. continuity on a compact rectangle, or absolute integrability), not something that comes for free.
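In the good case Fubini's theorem does apply, and one can see it numerically (a sketch with an arbitrary continuous integrand): both iteration orders of a midpoint-rule approximation agree with the exact value.

```python
# Midpoint-rule approximation of the integral of x*y^2 over [0,1] x [0,1].
# The exact value is (1/2)*(1/3) = 1/6, and Fubini says both iteration
# orders must give it, since the integrand is continuous on a rectangle.
N = 400

def midpoints(n):
    return [(i + 0.5) / n for i in range(n)]

f = lambda x, y: x * y**2

# Integrate in x first, then in y.
dx_first = sum(sum(f(x, y) for x in midpoints(N)) / N for y in midpoints(N)) / N
# Integrate in y first, then in x.
dy_first = sum(sum(f(x, y) for y in midpoints(N)) / N for x in midpoints(N)) / N

print(dx_first, dy_first)   # both close to 1/6
```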
Tangentially calculus-related (really it's more analysis, but those are related anyway): The fact that $\mathbb{R}$ has an ordering allows one to define things like the Henstock–Kurzweil integral. Such an extension of the Lebesgue integral is (AFAIK) not possible in $\mathbb{R}^n$ for $n > 1$.
It seems to me that an important difference is that while in one-variable calculus one deals with only one derivative, in multi-variable calculus there are infinitely many derivatives, the directional derivatives, of which the partial derivatives are a particular case. There is also the total derivative, for functions whose variables themselves depend on another variable. All of these derivatives generalize the derivative of a single-variable function.
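The total derivative is just the chain rule in action. A small sketch (with an illustrative $f$ and curve of my own choosing): for $g(t) = f(x(t), y(t))$ one has $g'(t) = f_x\,x'(t) + f_y\,y'(t)$, which can be checked against a finite difference of $g$.

```python
import math

# f(x, y) = x*y along the curve x(t) = cos(t), y(t) = t^2.
def g(t):
    return math.cos(t) * t**2

def total_derivative(t):
    x, y = math.cos(t), t**2
    dx, dy = -math.sin(t), 2 * t
    return y * dx + x * dy        # f_x = y, f_y = x  (chain rule)

t0 = 1.0
h = 1e-6
fd = (g(t0 + h) - g(t0 - h)) / (2 * h)   # central difference of g

print(total_derivative(t0), fd)
```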
The topology of $\mathbb{R}^n$ is much more complicated than the topology of $\mathbb{R}$. For example, a simply connected subset of $\mathbb{R}$ is just an interval. A simply connected subset of $\mathbb{R}^n$ can be very complicated. This matters because simple connectivity enters as a hypothesis in several theorems.
The boundary of an interval in $\mathbb{R}$ is simply a pair of points (or 1 or 0 points if the interval is unbounded). But the boundary of an open set in $\mathbb{R}^n$ can be a manifold or something more complicated. Since the generalization of the fundamental theorem of calculus (Stokes' theorem) relates an integral over a region to an integral over its boundary, the theorem is more complicated in several variables. Integrating over the boundary of an interval in $\mathbb{R}$ is simply evaluating a function at two points; integrating over a manifold is more subtle and requires a large amount of machinery to set up formally.
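Green's theorem, the two-dimensional case of this boundary principle, can at least be checked numerically in a simple situation (a sketch with a standard example field, not a formal treatment): for $P = -y$, $Q = x$ on the unit disk, $\oint P\,dx + Q\,dy$ should equal $\iint (Q_x - P_y)\,dA = \iint 2\,dA = 2\pi$.

```python
import math

# Midpoint-rule approximation of the line integral of -y dx + x dy
# around the unit circle, parametrized by (cos t, sin t), t in [0, 2*pi].
# Green's theorem predicts the value 2 * (area of the unit disk) = 2*pi.
N = 10000
boundary = 0.0
for i in range(N):
    t = 2 * math.pi * (i + 0.5) / N
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)     # derivatives of the parametrization
    boundary += (-y * dx + x * dy) * (2 * math.pi / N)

print(boundary)   # close to 2*pi
```

Note how much structure even this toy computation needs (a parametrization of the boundary, an orientation); that is exactly the machinery the one-variable theorem lets you skip.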