A relationship between matrices, Bernoulli polynomials, and binomial coefficients
Here's another way to look at this. The Bernoulli polynomials can be defined by the property
$$\int_x^{x+1} B_n(u) \, du = x^n.$$
So if we let $T$ be the operator from the set of polynomials to itself given by $(Tf)(x) = \int_x^{x+1} f(u) \, du$, then we have $(TB_n)(x) = x^n$. The operator $T$ sends $x^n$ to $$\int_x^{x+1} u^n \, du = \frac{1}{n+1}\left((x+1)^{n+1} - x^{n+1}\right) = \sum_{k=0}^{n} \frac{1}{n+1} {n+1 \choose k}x^k.$$
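If you want to check these two identities on a computer, here is a quick SymPy sketch; it is only a sanity check, and the variable names are my own.

```python
from sympy import symbols, integrate, bernoulli, binomial, Rational, expand

x, u = symbols('x u')

# Defining property: integrating B_n over [x, x+1] recovers x^n
for n in range(6):
    assert expand(integrate(bernoulli(n, u), (u, x, x + 1)) - x**n) == 0

# Action of T on monomials: T x^n = sum_k (1/(n+1)) * C(n+1, k) * x^k
for n in range(6):
    lhs = integrate(u**n, (u, x, x + 1))
    rhs = sum(Rational(1, n + 1) * binomial(n + 1, k) * x**k for k in range(n + 1))
    assert expand(lhs - rhs) == 0

print("both identities hold for n = 0, ..., 5")
```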
Writing $T$ as an infinite matrix with respect to the basis $1, x, x^2, \ldots$ gives
$$T = \left(\begin{array}{rrrrr} 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \ldots\\ 0 & 1 & 1 & 1 & \ldots\\ 0 & 0 & 1 & \frac{3}{2} & \ldots\\ 0 & 0 & 0 & 1 & \ldots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array}\right). $$
We can compute the inverse by inverting this truncation (since the matrix is upper triangular with $1$'s on the diagonal, inverting a leading truncation gives the corresponding truncation of the inverse), which gives the matrix form of $T^{-1}$:
$$T^{-1} = \left(\begin{array}{rrrrr} 1 & -\frac{1}{2} & \frac{1}{6} & 0 & \ldots\\ 0 & 1 & -1 & \frac{1}{2} & \ldots\\ 0 & 0 & 1 & -\frac{3}{2} & \ldots\\ 0 & 0 & 0 & 1 & \ldots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array}\right)$$
This operator sends $x^n$ to $B_n(x)$, so column $n$ contains the coefficients of $B_n(x)$ in the basis $1, x, x^2, \ldots$.
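As a concrete check, one can build a finite truncation of $T$, invert it, and compare the columns with the Bernoulli polynomials; a small SymPy sketch (the truncation size $N$ is arbitrary):

```python
from sympy import Matrix, Rational, binomial, bernoulli, Poly, symbols

x = symbols('x')
N = 6  # work in the truncated basis 1, x, ..., x^(N-1)

# Entry (k, n) of T: coefficient of x^k in T x^n = (1/(n+1)) * ((x+1)^(n+1) - x^(n+1))
T = Matrix(N, N, lambda k, n: Rational(1, n + 1) * binomial(n + 1, k) if k <= n else 0)
Tinv = T.inv()

# Column n of T^{-1} should be the coefficient vector of B_n(x), in ascending order
for n in range(N):
    coeffs = Poly(bernoulli(n, x), x).all_coeffs()[::-1]   # constant term first
    coeffs += [0] * (N - len(coeffs))                      # pad with zeros up to length N
    assert list(Tinv[:, n]) == coeffs

print("columns of the truncated T^{-1} match B_0, ..., B_5")
```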
To see how this fits with your equations, note that we may factor the matrix $T$ to pull out the fraction $\frac{1}{n+1}$ appearing in its columns, since multiplying by a diagonal matrix on the right scales the columns of a matrix by the diagonal entries. This gives $$ T = \left(\begin{array}{rrrrr} 1 & 1 & 1 & 1 & \ldots\\ 0 & 2 & 3 & 4 & \ldots\\ 0 & 0 & 3 & 6 & \ldots\\ 0 & 0 & 0 & 4 & \ldots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array}\right)\left(\begin{array}{rrrrr} 1 & 0 & 0 & 0 & \ldots\\ 0 & \frac{1}{2} & 0 & 0 & \ldots\\ 0 & 0 & \frac{1}{3} & 0 & \ldots\\ 0 & 0 & 0 & \frac{1}{4} & \ldots\\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array}\right)$$
The matrix on the left is your $M_P$ (infinitely extended). Calling the diagonal matrix on the right $D$, we have $T = M_P D$, or equivalently $DT^{-1} = M_P^{-1}$, which is your last equation.
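Here is the same factorization checked on a finite truncation, with $M_P$ taken to be the binomial matrix above (again just a sketch):

```python
from sympy import Matrix, Rational, binomial

N = 6
T   = Matrix(N, N, lambda k, n: Rational(1, n + 1) * binomial(n + 1, k) if k <= n else 0)
M_P = Matrix(N, N, lambda k, n: binomial(n + 1, k) if k <= n else 0)  # the binomial matrix
D   = Matrix.diag(*[Rational(1, n + 1) for n in range(N)])            # diag(1, 1/2, 1/3, ...)

assert T == M_P * D              # the factorization T = M_P D
assert D * T.inv() == M_P.inv()  # equivalently, D T^{-1} = M_P^{-1}
print("T = M_P D and D T^{-1} = M_P^{-1} hold for the 6 x 6 truncation")
```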
To my mind, this is the most natural way to compute Bernoulli polynomials. A few years ago I was playing around with this idea before I knew what a Bernoulli polynomial was. I was slightly disappointed, although not too surprised, to hear that someone else had discovered this first. I was beaten by some three hundred years, no less. It ended up as part of my undergraduate thesis. :)
This is a fascinating problem, but in the $P_{(4)}$ case we have
$$\tag{1} M_{P_{(4)}}^{-1}=\begin{pmatrix} 1 & -1/2 & 1/6 & 0 & -1/30\\ & 1/2 & -1/2 & 1/4 & 0\\ & & 1/3 & -1/2 & 1/3\\ & & & 1/4 & -1/2\\ & & & & 1/5 \end{pmatrix} $$
This factors as
$$ M_{P_{(4)}}^{-1}=\begin{pmatrix} 1 & -1 & 1/2 & 0 & -1/6\\ & 1 & -3/2 & 1 & 0\\ & & 1 & -2 & 5/3\\ & & & 1 & -5/2\\ & & & & 1 \end{pmatrix} \begin{pmatrix} 1 & & & & \\ & 1/2 & & & \\ & & 1/3 & & \\ & & & 1/4 & \\ & & & & 1/5 \end{pmatrix} $$
Unfortunately, as we can read off, the columns of the matrix on the left do not give us Bernoulli polynomials :(
Note, however, that the first row of the matrix in (1) does give us the Bernoulli numbers. So perhaps the first row of the inverse matrix gives you the Bernoulli numbers, even though you do not obtain the Bernoulli polynomials from the columns.
Again, read this with caution and suspicion: the good reader should compute this and verify it for him or herself!
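In that spirit, here is a short SymPy check of the matrix in (1) and of the first-row observation, assuming $M_{P_{(4)}}$ is the $5\times 5$ binomial matrix from the other answer:

```python
from sympy import Matrix, binomial, bernoulli

N = 5  # the P_(4) case: basis 1, x, x^2, x^3, x^4

M_P4 = Matrix(N, N, lambda k, n: binomial(n + 1, k) if k <= n else 0)
Minv = M_P4.inv()
print(Minv)  # should reproduce the matrix in (1)

# The first row should consist of the Bernoulli numbers B_n = B_n(0), so B_1 = -1/2 here
first_row = list(Minv[0, :])
assert first_row == [bernoulli(n, 0) for n in range(N)]  # 1, -1/2, 1/6, 0, -1/30
print("first row:", first_row)
```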
Addendum
Write out
$$ M_{P_{(n)}} = DU $$
where $D$ is a diagonal matrix, and $U=I+X$ is a unit upper triangular matrix (here $X$ is the strictly upper triangular part of $U$). The claim is that
$$(DU)^{-1}=(I+X)^{-1}D^{-1}=(I-X+X^{2}-X^{3}+\dots+(-1)^{n}X^{n})D^{-1}$$
produces the coefficients of Bernoulli polynomials in the columns, and the first row is the Bernoulli numbers. I claim that the former (the Bernoulli polynomial data in the columns) implies the latter (the first row consisting of Bernoulli numbers), and that this implication is obvious.
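Since $X$ is strictly upper triangular and hence nilpotent, the series above terminates. Here is a sketch of this computation for the $5\times 5$ case, again taking $M_{P_{(4)}}$ to be the binomial matrix (an assumption on my part):

```python
from sympy import Matrix, binomial, eye, zeros

N = 5  # M_{P_(4)} is 5 x 5, so X^5 = 0 and the series terminates

M = Matrix(N, N, lambda k, n: binomial(n + 1, k) if k <= n else 0)
D = Matrix.diag(*[M[i, i] for i in range(N)])  # diagonal part: diag(1, 2, 3, 4, 5)
U = D.inv() * M                                # unit upper triangular, U = I + X
X = U - eye(N)                                 # strictly upper triangular, so nilpotent

# (DU)^{-1} = (I - X + X^2 - ... + (-1)^(N-1) X^(N-1)) D^{-1}
series = zeros(N, N)
term = eye(N)
for j in range(N):
    series += (-1) ** j * term
    term = term * X

assert series * D.inv() == M.inv()
print("the terminating series reproduces M_{P_(4)}^{-1}")
```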
Conjecture
It seems that the OP has found the matrix version of the Bernoulli polynomials described on Wikipedia in terms of forward differences.
I would have urged the OP to change notation and use lower triangular matrices. Why? Because then you could write $$ M_{P_{(n)}}\begin{bmatrix}0\\ 0\\ \vdots\\ 0\\ x^{n}\end{bmatrix} \cong (1+x)^{n+1}-x^{n+1} $$ in vector form; the current notation demands we use row vectors for polynomials.
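For what it is worth, here is a small check of the displayed correspondence, keeping the upper-triangular convention and reading $\cong$ as "has coefficient vector" (my own reading, so take it with a grain of salt):

```python
from sympy import Matrix, binomial, symbols, Poly, expand, zeros

x = symbols('x')
n = 4
N = n + 1

M = Matrix(N, N, lambda k, j: binomial(j + 1, k) if k <= j else 0)

# Coefficient vector of x^n in the basis 1, x, ..., x^n: a single 1 in the last slot
e_n = zeros(N, 1)
e_n[n, 0] = 1

# M * e_n picks out column n, i.e. the coefficients of (1+x)^(n+1) - x^(n+1)
target = Poly(expand((1 + x) ** (n + 1) - x ** (n + 1)), x).all_coeffs()[::-1]
assert list(M * e_n) == target
print("M_{P_(n)} applied to x^n corresponds to (1+x)^(n+1) - x^(n+1)")
```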
I'm about to go to bed, so I do not believe I have time to prove my conjecture. If no one has proven it by tomorrow, I'll try writing up a proof.