Taylor series of a polynomial
Note: Rewritten essentially from the ground up.
Sorry for the somewhat obfuscated previous version. I did not realize at first that you were trying to write the end result as a polynomial in $x$ instead of $(x-x_c)$, and then I was trying to bootstrap that into what I had already written.
I'm going to use $a$ instead of $x_c$, because it is a little simpler to type and harder to confuse with $x$, if you don't mind.
The simplest way to think about what you are doing is that it is the composition of three linear transformations.
First you take a polynomial written in terms of the basis $\beta=[1,x,x^2,\ldots,x^N]$ of the space of polynomials of degree at most $N$, and you rewrite it in terms of the basis $\gamma=[1,(x-a),(x-a)^2,\ldots,(x-a)^N]$. That is, a change of coordinates.
Then you take the polynomial written in terms of $\gamma$, and you project down to the subspace generated by $[1,(x-a),\ldots,(x-a)^M]$. That is, you erase the terms of degree $M+1,\ldots,N$. Call this $P^N_M$.
Finally, you take the resulting polynomial, which is written in terms of the basis $\gamma$, and you translate it back into the basis $\beta$; another change of coordinates.
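In fact, Mathematica's Series and Normal compute exactly this composite (expand about $a$, truncate at degree $M$, re-expand in powers of $x$); here is a one-line sketch, with truncTaylor just an illustrative name:

truncTaylor[poly_, x_, a_, m_] := Expand[Normal[Series[poly, {x, a, m}]]]
truncTaylor[c[0] + c[1] x + c[2] x^2, x, a, 1]  (* should give c[0] - a^2 c[2] + c[1] x + 2 a c[2] x *)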
Being a composition of three linear transformations, it can be represented by a matrix which is the product of three matrices, the matrices that represent the three operations.
The matrix for $P^N_M$ is the simplest one: it is a block-diagonal matrix that has the $(M+1)\times (M+1)$ identity in the first block, and the $(N-M)\times(N-M)$ zero matrix as the second block.
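In Mathematica this projection is a one-liner (pmat is an illustrative name, reused in the sketches below):

(* sketch: P^N_M as a diagonal matrix of m+1 ones followed by n-m zeros *)
pmat[n_, m_] := DiagonalMatrix[PadRight[ConstantArray[1, m + 1], n + 1]]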
The matrix for step 3 is just the inverse of the matrix for step 1. To each real number $r$ we associate the matrix whose columns give the binomial expansions of the powers $(x+r)^k$: $$B_r = \left(\begin{array}{cccccc} 1 & r & r^2 & r^3 & \cdots & r^N\\ 0 & 1 & 2r & 3r^2 & \cdots & \binom{N}{1}r^{N-1}\\ 0 & 0 & 1 & 3r & \cdots & \binom{N}{2}r^{N-2}\\ 0 & 0 & 0 & 1 & \cdots & \binom{N}{3}r^{N-3}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & \cdots & \binom{N}{N-1}r\\ 0 & 0 & 0 & 0 & \cdots & 1 \end{array}\right).$$ When you go from the basis given by powers of $x$ to the basis given by powers of $x-a$, you do this by multiplying by $B_a$. To go from the basis given by the powers of $(x-a)$ back to the powers of $x$, the change of variable $u=x-a$ shows that you are going from powers of $u$ to powers of $u+a = u-(-a)$, so you achieve this by multiplying by $B_{-a}$. Thus, your operation is given by the matrix $$B_{-a}P^N_MB_a.$$ Now, this is a product of three $(N+1)\times (N+1)$ matrices, so you may wonder why I have an $(N+1)\times (N+1)$ matrix while you have an $(M+1)\times (N+1)$ matrix. The reason is that because the last $N-M$ rows of $P^N_M$ are all zero and all the matrices are upper triangular, the last $N-M$ rows of this product are likewise zero; you are just omitting them.
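Since the entry of $B_r$ in row $i$, column $j$ (counting from $0$) is $\binom{j}{i}r^{j-i}$, the matrix is easy to generate; here is a sketch (bmat is an illustrative name):

(* sketch: B_r for degree at most n; entry (i, j) is Binomial[j, i] r^(j - i) *)
bmat[r_, n_] := Table[If[j >= i, Binomial[j, i] r^(j - i), 0], {i, 0, n}, {j, 0, n}]
Simplify[bmat[-a, 5].bmat[a, 5] == IdentityMatrix[6]]  (* B_{-a} is the inverse of B_a *)
True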
So, if you have the polynomial $C_0 + C_1x + \cdots +C_Nx^N$, then your end result is the polynomial $K_0 + K_1x+\cdots +K_Nx^N$, where $$\left(\begin{array}{c} K_0\\ K_1\\ \vdots\\ K_N\end{array}\right) = B_{-a}P^N_MB_a\left(\begin{array}{c} C_0\\ C_1\\ \vdots\\ C_N\end{array}\right).$$
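Putting the two sketches above together (bmat and pmat are the illustrative helpers defined above):

(* sketch: the full (n+1) x (n+1) truncation matrix and its action on a coefficient vector *)
truncMatrix[a_, n_, m_] := bmat[-a, n].pmat[n, m].bmat[a, n]
truncMatrix[a, 5, 3].(c /@ Range[0, 5])  (* the K_i for the N = 5, M = 3 example below *)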
For your second example, you have $N=5$ and $M=3$. For $N=5$, you have \begin{align*} B_{a} &= \left(\begin{array}{cccccc} 1 & a & a^2 & a^3 & a^4 & a^5\\ 0 & 1 & 2a & 3a^2 & 4a^3 & 5a^4\\ 0 & 0 & 1 & 3a & 6a^2 & 10a^3\\ 0 & 0 & 0 & 1 & 4a & 10a^2\\ 0 & 0 & 0 & 0 & 1 & 5a\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)\\ P^5_3 &= \left(\begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right)\\ B_{-a} &=\left(\begin{array}{rrrrrr} 1 & -a & a^2 & -a^3 & a^4 & -a^5\\ 0 & 1 & -2a & 3a^2 & -4a^3 & 5a^4\\ 0 & 0 & 1 & -3a & 6a^2 & -10a^3\\ 0 & 0 & 0 & 1 & -4a & 10a^2\\ 0 & 0 & 0 & 0 & 1 & -5a\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right). \end{align*} If you multiply out $B_{-a}P^5_3B_a$ you get \begin{align*} B_{-a}P^5_3B_a &= \left(\begin{array}{rrrrrr} 1 & -a & a^2 & -a^3 & a^4 & -a^5\\ 0 & 1 & -2a & 3a^2 & -4a^3 & 5a^4\\ 0 & 0 & 1 & -3a & 6a^2 & -10a^3\\ 0 & 0 & 0 & 1 & -4a & 10a^2\\ 0 & 0 & 0 & 0 & 1 & -5a\\ 0 & 0 & 0 & 0 & 0 & 1 \end{array}\right)\left(\begin{array}{cccccc} 1 & a & a^2 & a^3 & a^4 & a^5\\ 0 & 1 & 2a & 3a^2 & 4a^3 & 5a^4\\ 0 & 0 & 1 & 3a & 6a^2 & 10a^3\\ 0 & 0 & 0 & 1 & 4a & 10a^2\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right)\\ &= \left(\begin{array}{rrrrrr} 1 & 0 & 0 & 0 & -a^4 & -4a^5\\ 0 & 1 & 0 & 0 & 4a^3 & 15a^4\\ 0 & 0 & 1 & 0 & -6a^2 & -20a^3\\ 0 & 0 & 0 & 1 & 4a & 10a^2\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right) \end{align*} exactly what you have except for the two extra rows of zeros at the bottom.
Added. If you think about it, you'll see that this process always produces a block upper triangular matrix, with the $(M+1)\times(M+1)$ identity in the upper left and the $(N-M)\times(N-M)$ zero matrix in the bottom right: since $B_a$ is upper triangular, the rows that $P^N_M$ zeroes out contribute nothing to the first $M+1$ columns of $P^N_MB_a$, so those columns of the product agree with the corresponding columns of $B_{-a}B_a = I$; and since the last $N-M$ rows of $P^N_MB_a$ are zero, the last $N-M$ rows of the product are zero as well. So the only part that needs computing is the block in the upper right. This shows how it relates to your non-square matrices.
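In code, this means only a small product of triangular slices needs arithmetic; a sketch, reusing bmat from above (upperRight is an illustrative name):

(* sketch: only the (m+1) x (n-m) upper-right block of B_{-a} P^N_M B_a needs computing *)
upperRight[a_, n_, m_] := bmat[-a, n][[1 ;; m + 1, 1 ;; m + 1]].bmat[a, n][[1 ;; m + 1, m + 2 ;;]]
upperRight[a, 5, 3]  (* the nonzero 4 x 2 corner of the N = 5, M = 3 example above *)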
Added 2. I should have added this as soon as you said you were programming it...
There is an alternative way of computing this matrix, which may be faster when $M$ is larger than half of $N$. The idea is just to note that the matrix $P^N_M$ can be written as the difference of the identity matrix and the matrix that has the $(N-M)\times(N-M)$ identity on the bottom right and zeros elsewhere; call it $Q^N_M$; that is, $$P^N_M = I_{N+1} - Q^N_M = \left(\begin{array}{ccccc} 1 & 0 & \cdots & 0 & 0\\ 0 & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0\\ 0 & 0 & \cdots & 0 & 1 \end{array}\right) - \left(\begin{array}{ccccc} 0 & 0 & \cdots & 0 & 0\\ 0 & 0 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & 1 & 0\\ 0 & 0 & \cdots & 0 & 1 \end{array}\right).$$ Then you have that $$B_{-a}P^{N}_MB_a = B_{-a}(I-Q^N_M)B_a = (B_{-a}IB_a) - B_{-a}Q^N_MB_a = I-B_{-a}Q^{N}_MB_a.$$ If $M$ is more than half of $N$, then there are fewer operations to perform in $B_{-a}Q^N_MB_a$ than in $B_{-a}P^N_MB_a$, because $Q^N_MB_a$ has only $N-M$ nonzero rows, whereas $P^N_MB_a$ has $M+1$ nonzero rows. So you can pick which of the two computations to do: either $B_{-a}P^N_MB_a$, or $I-B_{-a}Q^N_MB_a$; for $N$ and $M$ large, with $M$ close to $N$, using $Q^N_M$ will probably be faster than using $P^N_M$.
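And a quick symbolic check of this identity for the running example, again with the illustrative bmat and pmat from above:

(* sketch: verify I - B_{-a} Q B_a == B_{-a} P B_a for n = 5, m = 3 *)
qmat[n_, m_] := IdentityMatrix[n + 1] - pmat[n, m]
Simplify[IdentityMatrix[6] - bmat[-a, 5].qmat[5, 3].bmat[a, 5] == bmat[-a, 5].pmat[5, 3].bmat[a, 5]]
True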
Here is a Mathematica routine that is (more or less) an efficient way of performing Arturo's proposal (I assume the array of coefficients cofs is arranged with constant term first, i.e. $p(x)=\sum\limits_{k=0}^n \mathtt{cofs[[k+1]]}\,x^k$):
polxpd[cofs_?VectorQ, h_, d_] := Module[{n = Length[cofs] - 1, df},
  (* first double loop: Horner's scheme, "automatically differentiated" so that on
     exit df = {p(h), p'(h), p''(h)/2!, ..., p^(d)(h)/d!}, the scaled derivatives *)
  df = PadRight[{Last[cofs]}, d + 1];
  Do[
    Do[
      df[[j]] = df[[j - 1]] + h df[[j]],
      {j, Min[d, n - k + 1] + 1, 2, -1}];
    df[[1]] = cofs[[k]] + h df[[1]],
    {k, n, 1, -1}];
  (* second double loop: repeated synthetic division, shifting the expansion point
     from h back to 0 so the result is a coefficient list in powers of x *)
  Do[
    Do[
      df[[k]] -= h df[[k + 1]],
      {k, d, j, -1}],
    {j, d}];
  df]
Let's try it out:
polxpd[{c[0], c[1], c[2]}, h, 1] // FullSimplify
{c[0] - h^2*c[2], c[1] + 2*h*c[2]}
polxpd[c /@ Range[0, 5], h, 3] // FullSimplify
{c[0] - h^4*(c[4] + 4*h*c[5]), c[1] + h^3*(4*c[4] + 15*h*c[5]),
c[2] - 2*h^2*(3*c[4] + 10*h*c[5]), c[3] + 2*h*(2*c[4] + 5*h*c[5])}
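As a further sanity check (a sketch; Series performs the truncation directly), the routine agrees with Mathematica's own Taylor truncation:

poly = c /@ Range[0, 5] . x^Range[0, 5];
Simplify[polxpd[c /@ Range[0, 5], h, 3] . x^Range[0, 3] == Normal[Series[poly, {x, h, 3}]]]
True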
Now, Arturo gave the linear-algebraic interpretation of this conversion; I'll look at this from the algorithmic point of view:
For instance, see this (modified) snippet:
n = Length[cofs] - 1;
df = {Last[cofs]};
Do[
df[[1]] = cofs[[k]] + x df[[1]],
{k, n, 1, -1}];
This is nothing more than the Horner scheme (alias "synthetic division") for evaluating the polynomial at x. What is not so well known is that the Horner scheme can be hijacked so that it computes derivatives as well as polynomial values. We can "differentiate" the previous code snippet like so (i.e., automatic differentiation):
n = Length[cofs] - 1;
df = {Last[cofs], 0};
Do[
df[[2]] = df[[1]] + x df[[2]];
df[[1]] = cofs[[k]] + x df[[1]],
{k, n, 1, -1}];
where the rule is $\frac{\mathrm d}{\mathrm dx}\mathtt{df[[j]]} = \mathtt{df[[j+1]]}$. "Differentiating" the line df = {Last[cofs]} (the leading coefficient of the polynomial) requires appending a 0 (the derivative of a constant is $0$); "differentiating" the evaluation line df[[1]] = cofs[[k]] + x df[[1]] gives df[[2]] = df[[1]] + x df[[2]] (use the product rule, and the fact that $\frac{\mathrm d}{\mathrm dx}\mathtt{cofs[[k]]} = 0$). Continuing inductively (and replacing the x with h), we obtain the first double loop of polxpd[].
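Here is a quick symbolic check of this two-slot scheme (a sketch with throwaway global names):

cofs = c /@ Range[0, 4]; n = Length[cofs] - 1;
poly = cofs . x^Range[0, n];
df = {Last[cofs], 0};
Do[
  df[[2]] = df[[1]] + x df[[2]];
  df[[1]] = cofs[[k]] + x df[[1]],
  {k, n, 1, -1}];
Simplify[df == {poly, D[poly, x]}]
True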
Actually, the contents of df after the first double loop are the "scaled derivatives"; that is, df[[1]] $= p(h)$, df[[2]] $= p^\prime(h)$, df[[3]] $= \frac{p^{\prime\prime}(h)}{2!}$, and so on.
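To see the scaling concretely, here is the first double loop run on its own with $d=2$ and compared against the exact scaled derivatives (again a throwaway sketch):

cofs = c /@ Range[0, 4]; n = Length[cofs] - 1; d = 2;
poly = cofs . x^Range[0, n];
df = PadRight[{Last[cofs]}, d + 1];
Do[
  Do[df[[j]] = df[[j - 1]] + h df[[j]], {j, Min[d, n - k + 1] + 1, 2, -1}];
  df[[1]] = cofs[[k]] + h df[[1]],
  {k, n, 1, -1}];
Simplify[df == ({poly, D[poly, x], D[poly, {x, 2}]/2!} /. x -> h)]
True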
What the second double loop accomplishes is the "shifting" of the polynomial by -h; this is in fact synthetic division applied repeatedly to the coefficients output by the first double loop, as mentioned here.
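To make that explicit, the second double loop can be extracted as a standalone shift routine (shiftBack is an illustrative name): given the Taylor coefficients about h, it returns the coefficients in powers of x.

(* sketch: shift the expansion point from h to 0 by repeated synthetic division *)
shiftBack[taylorCofs_, h_] := Module[{df = taylorCofs, d = Length[taylorCofs] - 1},
  Do[
    Do[df[[k]] -= h df[[k + 1]], {k, d, j, -1}],
    {j, d}];
  df]
shiftBack[{p0, p1}, h]  (* {p0 - h p1, p1}, i.e. p0 + p1 (x - h) rewritten in powers of x *)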