Polynomial shift
This is commonly referred to as a "Taylor shift", for the reason already quoted. Unfortunately, a web search typically reveals way more results for "Taylor Swift"...
[edit]Funnily enough, this holds even after you click on "Search instead for taylor shift" on Google. On the other hand, by now this very thread scores #1 for the query "taylor shift polynomial". However, "taylor swift polynomial" brings you to the "Taylor Polynomial Series formula" on Uncyclopedia, which is described as “A Country/Western musical formula invented by Taylor Swift”, as well as a Polynomials Song, a parody of Taylor Swift's "Everything Has Changed". Well, plus ça change, plus c'est la même chose (the more things change, the more they stay the same)...[/edit]
There is a short survey of complexity results for Taylor shifts in chapter 4 of Jürgen Gerhard's Modular Algorithms in Symbolic Summation and Symbolic Integration (Springer, Lecture Notes in Computer Science, Vol. 3218, 2004; DOI: 10.1007/b104035), with pointers to the original research papers. Also, look into Computer Algebra textbooks; good buzzwords are "Taylor shift", "fast polynomial division with remainder and applications", and "FFT-based polynomial multipoint evaluation and interpolation".
Bottom line: Using, e.g., a divide-and-conquer approach, you can save roughly one order of magnitude (i.e., a factor of $n$) over the classical $O(n^2)$ algorithm. That brings the cost down to $\tilde O(n)$ arithmetic operations in the coefficient ring, where polylogarithmic factors are ignored. Note that these are arithmetic operations; the bit complexity will be higher by a factor of $n$ and, obviously, will also depend on the size (magnitude and/or precision) of the input.
The fast algorithms are, conceptually, not too involved. But they require a fair number of asymptotically fast subroutines for polynomial arithmetic, so it's not for the faint-hearted. And when it comes to an actual implementation: that could well be a totally different beast. The naive algorithm will work quite well for low degrees; the performance of the fast algorithms depends crucially on the underlying implementation of fast polynomial arithmetic. That is to say, don't do it on your own, but look for libraries like Victor Shoup's NTL, William Hart's FLINT, Fredrik Johansson's Arb, Andreas Enge's MPFRCX, or a full-grown computer algebra system.
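Just to convey the flavour of the divide-and-conquer idea (a toy Python sketch of mine, not how the libraries above do it): split $p = p_0 + x^m p_1$, shift both halves recursively, and recombine via $p(x+h) = p_0(x+h) + (x+h)^m\,p_1(x+h)$. With schoolbook multiplication, as below, this is still $O(n^2)$; the $\tilde O(n)$ bound only materializes once `poly_mul` is FFT-based and the powers $(x+h)^m$ are shared across recursion levels.

```python
from math import comb

def poly_mul(p, q):
    """Schoolbook product; an FFT-based multiply belongs here for O~(n)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    """Coefficient-wise sum of two (possibly different-length) polynomials."""
    if len(p) < len(q):
        p, q = q, p
    return [pi + (q[i] if i < len(q) else 0) for i, pi in enumerate(p)]

def shift_dc(a, h):
    """Divide-and-conquer Taylor shift: split p = p0 + x^m * p1 and use
    p(x+h) = p0(x+h) + (x+h)^m * p1(x+h).  a[i] is the coefficient of x^i."""
    n = len(a)
    if n == 1:
        return list(a)
    m = n // 2
    lo = shift_dc(a[:m], h)                                   # p0(x+h)
    hi = shift_dc(a[m:], h)                                   # p1(x+h)
    xh_m = [comb(m, i) * h**(m - i) for i in range(m + 1)]    # (x+h)^m
    return poly_add(lo, poly_mul(xh_m, hi))

# shift_dc([-1, 0, 1], 1) == [0, 2, 1], i.e. (x+1)^2 - 1 = x^2 + 2x
```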
Also, if you are only interested in shifts by 1 ($x \mapsto x+1$), the Horner-scheme-like Taylor shift can be made multiplication-free, which significantly raises the threshold at which asymptotically fast methods take the lead.
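For illustration, a minimal sketch of that multiplication-free variant (repeated synthetic division at $1$; the function name is mine):

```python
def taylor_shift_by_one(a):
    """p(x) -> p(x+1) by repeated synthetic division at 1.
    a[i] is the coefficient of x^i.  Uses O(n^2) additions and
    no multiplications at all."""
    a = list(a)
    n = len(a) - 1
    for k in range(n):
        # one Horner-style pass; after pass k, a[k] holds its final value
        for j in range(n - 1, k - 1, -1):
            a[j] += a[j + 1]
    return a

# taylor_shift_by_one([-1, 0, 1]) == [0, 2, 1], i.e. x^2 + 2x
```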
[edit] I forgot to mention: if you substitute $1/x$ for $x$, you end up with a rational function. But you might want to get $q(x) = x^n p(1/x)$, and this is simply the polynomial with the coefficients in reverse order, i.e., $q_i = p_{n-i}$. Also, a scaling ($x \mapsto s\cdot x$) is rather simple to achieve: just scale the $i$-th coefficient by $s^i$. This is especially cheap if $s$ is a power of two (and, thus, the multiplication is just a bitshift). For an arbitrary affine substitution $x \mapsto a\cdot x+b$ or Möbius transformation $x \mapsto \frac{a\cdot x +b}{c\cdot x + d}$, decompose it into such simpler steps. For a higher-order composition (i.e., computing $(f \circ g)(x) = f(g(x))$, where both $f$ and $g$ are non-linear polynomials), you should probably look into algorithms based on multipoint evaluation and interpolation.[/edit]
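These building blocks are straightforward to spell out; here is a hedged sketch (all function names are mine, and the general shift is just the classical $O(n^2)$ synthetic-division version):

```python
def reverse_coeffs(a):
    """q(x) = x^n * p(1/x): simply the coefficients in reverse order."""
    return a[::-1]

def scale_coeffs(a, s):
    """p(s*x): scale the i-th coefficient by s**i."""
    return [c * s**i for i, c in enumerate(a)]

def taylor_shift(a, b):
    """p(x) -> p(x+b) by repeated synthetic division at b; O(n^2)."""
    a = list(a)
    n = len(a) - 1
    for k in range(n):
        for j in range(n - 1, k - 1, -1):
            a[j] += b * a[j + 1]
    return a

def affine_substitute(a, s, b):
    """p(s*x + b): shift by b first, then scale by s."""
    return scale_coeffs(taylor_shift(a, b), s)
```

A Möbius transformation can then be assembled from a shift, a coefficient reversal (for the $1/x$ step), a scaling, and another shift.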
Yes, there is. The trick is Taylor expansion. If you want to shift $p(x)=x^2-1$ to $p(x+1)$, take the Taylor expansion at the point $x-1$ with $\Delta x=x-(x-1)=1$: $$\begin{align}p(x)&=p(x-1)+p'(x-1)\Delta x+\frac12p''(x-1)\Delta x^2\\&=(x-1)^2-1+2(x-1)+1\\&=(x-1)^2+2(x-1)\end{align}$$ Don't expand it. Just replace $x$ by $x+1$ to get $$p(x+1)=x^2+2x$$
Note that this trick is only valid for analytic functions (more precisely, for functions that equal their Taylor series). But I think it is faster than direct substitution only for polynomials, since their Taylor expansion has finitely many terms and their derivatives are easy to compute.
Of course, Taylor expansion, for instance by repeated application of the Horner scheme, is about as fast as iteratively expanding the binomials and adding them up with the coefficients; that is, it requires $O(n^2)$ operations.
Schönhage proposed a faster method using FFT techniques (integer or floating point), based on the binomial expansion
$$\sum_{k=0}^n a_k(x+h)^k=\sum_{j=0}^n\left(\sum_{k=j}^n (k!\,a_k)\frac{h^{k-j}}{(k-j)!}\right)\frac{x^{j}}{j!}$$
where the inner sum is the $(n-j)$-th entry of the convolution product of the sequences $(n!\,a_n,\,(n-1)!\,a_{n-1},\,\dots,\,3!\,a_3,\,2!\,a_2,\,a_1,\,a_0)$ and $(1,\,h,\,\frac{h^2}{2!},\,\frac{h^3}{3!},\,\dots,\,\frac{h^n}{n!})$.
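As a sanity check of this identity (a direct transcription of mine, not Schönhage's actual FFT implementation), the following Python sketch computes the shift through a naive convolution over the rationals; the double loop is exactly what an FFT-based product would replace:

```python
from fractions import Fraction
from math import factorial

def taylor_shift_conv(a, h):
    """p(x) -> p(x+h) via the convolution identity above; exact over Q.
    a[i] is the coefficient of x^i."""
    n = len(a) - 1
    u = [factorial(n - i) * Fraction(a[n - i]) for i in range(n + 1)]  # n!a_n, ..., a_0
    v = [Fraction(h)**m / factorial(m) for m in range(n + 1)]          # 1, h, h^2/2!, ...
    conv = [Fraction(0)] * (2 * n + 1)
    for i, ui in enumerate(u):          # naive convolution: this double loop
        for m, vm in enumerate(v):      # is where the FFT speedup would enter
            conv[i + m] += ui * vm
    # the coefficient of x^j is the (n-j)-th convolution entry divided by j!
    return [conv[n - j] / factorial(j) for j in range(n + 1)]

# taylor_shift_conv([-1, 0, 1], 1) == [0, 2, 1], i.e. x^2 + 2x
```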