Summation of Euler's divergent series: $0!-1!+2!-3!+\cdots$
Hints:
- For every fixed $i$, identify the coefficient of $x^i$ in the LHS and in the RHS.
- Deduce that the task is to prove that $\lim\limits_{n\to\infty}\dfrac{\omega_{i,n}+\cdots+\omega_{n,n}}{\omega_{0,n}+\cdots+\omega_{n,n}}=1$, for every fixed $i$.
- Equivalently, one must show that $\lim\limits_{n\to\infty}\dfrac{\omega_{k,n}}{\omega_{0,n}+\cdots+\omega_{n,n}}=0$, for every fixed $k$.
- Deduce that, if $\lim\limits_{n\to\infty}\dfrac{\omega_{k,n}}{\omega_{k+1,n}}=0$, for every fixed $k$, then the result holds.
- Compute the ratio $\dfrac{\omega_{k,n}}{\omega_{k+1,n}}$ and its limit as $n\to\infty$, and conclude (a small numerical sketch follows these hints).
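To make that last hint concrete, here is a tiny numerical sketch. It assumes the weights $\omega_{k,n}=\binom{n}{k}/k!$ that are implicit in the forward-difference rewrite further down; the closed form quoted in the comment follows from that assumption only.

```python
# Hedged sketch: assumes omega_{k,n} = C(n,k)/k!, the weights implicit in the
# forward-difference form of A_n given further down.
from math import comb, factorial

def omega(k, n):
    return comb(n, k) / factorial(k)

k = 3                                    # any fixed k
for n in (10, 100, 1000, 10000):
    # under the assumed weights the ratio equals (k+1)^2 / (n-k), which -> 0
    print(n, omega(k, n) / omega(k + 1, n))
```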
Since did has already said a lot, let me just offer a few more observations that are too long for a comment:
The limit of the weighted mean summation (i.e. somewhat akin to Cesàro summation, but with weights) being considered might look a bit less forbidding if rewritten in terms of forward differences:
$$A_n=\frac{s_0\omega_{0,n}+\cdots + s_n\omega_{n,n} }{\omega_{0,n}+\cdots+\omega_{n,n}}=\left.\frac{\Delta^n\left(\frac{s_k}{(-1)^k k!}\right)}{\Delta^n\left(\frac1{(-1)^k k!}\right)}\right|_{k=0}$$
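As a sanity check, the identity above can be verified in exact rational arithmetic. The little sketch below again assumes $\omega_{k,n}=\binom{n}{k}/k!$ and takes $s_k$ to be the partial sums of $\sum_k(-1)^kk!$ (i.e. the $x=1$ case).

```python
# Hedged check of the weighted-mean / forward-difference identity (x = 1 case),
# again assuming omega_{k,n} = C(n,k)/k!.
from fractions import Fraction
from math import comb, factorial

def s(k):                                    # partial sums of 0! - 1! + 2! - ...
    return sum((-1) ** j * factorial(j) for j in range(k + 1))

def fwd_diff(f, n):                          # Delta^n f(k) evaluated at k = 0
    return sum((-1) ** (n - j) * comb(n, j) * f(j) for j in range(n + 1))

n = 10
num = sum(comb(n, k) * Fraction(s(k), factorial(k)) for k in range(n + 1))
den = sum(comb(n, k) * Fraction(1, factorial(k)) for k in range(n + 1))
lhs = num / den                              # the weighted mean A_n
rhs = (fwd_diff(lambda k: Fraction(s(k), (-1) ** k * factorial(k)), n)
       / fwd_diff(lambda k: Fraction(1, (-1) ** k * factorial(k)), n))
print(lhs == rhs, float(lhs))                # expect: True, plus A_10 at x = 1
```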
In fact, what you are looking at is a special case of a family of extrapolation algorithms studied by Avram Sidi (see e.g. this or this or this). In those references, the relation between Borel summation (which is the usual method applied for summing divergent series like the one in the OP) and these extrapolation methods is noted. An efficient recursive algorithm for computing these extrapolations is also given.
On the other hand, the convergence of the OP's particular special case is not especially good compared with some other cases of the general extrapolation method, e.g.
$$\begin{align*} \mathcal D_n&=\left.\frac{\Delta^n\left(\frac{s_k}{(-1)^k k! x^k}\right)}{\Delta^n\left(\frac1{(-1)^k k! x^k}\right)}\right|_{k=0}\\ \mathcal L_n&=\left.\frac{\Delta^n\left(\frac{s_k (k+1)^{n-1}}{(-1)^k k! x^k}\right)}{\Delta^n\left(\frac{(k+1)^{n-1}}{(-1)^k k! x^k}\right)}\right|_{k=0}\\ \mathcal S_n&=\left.\frac{\Delta^n\left(\frac{s_k (k+1)_{n-1}}{(-1)^k k! x^k}\right)}{\Delta^n\left(\frac{(k+1)_{n-1}}{(-1)^k k! x^k}\right)}\right|_{k=0} \end{align*}$$
where $(a)_k$ is the Pochhammer symbol. In particular, the transformation $\mathcal D_n$ is a convergence acceleration method studied by Drummond, while $\mathcal L_n$ is in fact the Levin $t$-transformation (see this related question), and $\mathcal S_n$ is the modification of the Levin transformation due to E.J. Weniger.
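To make these formulas concrete, here is a rough Python/mpmath sketch of $\mathcal D_n$, $\mathcal L_n$ and $\mathcal S_n$ exactly as written above. It evaluates the forward differences directly rather than using the recursive algorithm mentioned earlier, and it assumes the $s_k$ are the partial sums of $\sum_k(-1)^kk!x^k$.

```python
# Hedged sketch of the D (Drummond), L (Levin t) and S (Weniger) transformations,
# evaluated directly from their Delta^n definitions in arbitrary precision.
from mpmath import mp, mpf, binomial, factorial, rf, nstr

mp.dps = 50                                        # working precision

def partial_sums(x, n):
    """s_0, ..., s_n for the series sum_k (-1)^k k! x^k."""
    x, s, out = mpf(x), mpf(0), []
    for k in range(n + 1):
        s += (-1) ** k * factorial(k) * x ** k
        out.append(s)
    return out

def fwd_diff(values):
    """Delta^n of a finite sequence, evaluated at k = 0."""
    n = len(values) - 1
    return sum((-1) ** (n - j) * binomial(n, j) * values[j] for j in range(n + 1))

def transform(x, n, kind="S"):
    s, x = partial_sums(x, n), mpf(x)
    def extra(k):                                  # the factor distinguishing D, L, S
        if kind == "D":
            return mpf(1)
        if kind == "L":
            return mpf(k + 1) ** (n - 1)
        return rf(k + 1, n - 1)                    # Pochhammer (k+1)_{n-1}
    base = [extra(k) / ((-1) ** k * factorial(k) * x ** k) for k in range(n + 1)]
    return fwd_diff([b * sk for b, sk in zip(base, s)]) / fwd_diff(base)

print(nstr(transform('0.5', 10, "S"), 10))         # should match the S_10 entry at x = 0.5 below
```

Nothing here is optimized: the explicit $\Delta^n$ sums are used only because they mirror the displayed formulas, whereas the references cited above compute the same quantities with a cheaper recursion.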
Here's a short table comparing the numerical performance of the various transformations with the regularized sum, $F(x)=\frac1{x}\exp\left(\frac1{x}\right)E_1\left(\frac1{x}\right)$ ($E_1(x)$ is the exponential integral), taking $n=10$:
\begin{array}{c|ccccc}x&F(x)&A_{10}&\mathcal D_{10}&\mathcal L_{10}&\mathcal S_{10}\\\hline 0.1&0.915633339&0.915625503&0.915633339&0.915633339&0.915633339\\ 0.2&0.852110881&0.851748876&0.852110881&0.852110881&0.852110881\\ 0.5&0.722657234&0.722476442&0.722656381&0.722657245&0.722657234\\ 0.75&0.650812854&0.650744812&0.650803895&0.650812841&0.650812848\\ 1&0.596347362&0.596310789&0.596310789&0.596347200&0.596347353\\ 1.5&0.517329839&0.517327162&0.517138593&0.517329576&0.517329988\\ 2&0.461455316&0.476549262&0.460953049&0.461456123&0.461455825\\ \end{array}
Larger values of $n$ will give better results, but only up to a point: eventually subtractive cancellation between the large terms in the forward differences sets in.
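One can see this effect with a rough sketch (again hedged: it assumes the weights $\omega_{k,n}=\binom{n}{k}/k!$ and uses mpmath's `expint` for $E_1$, so $F(1)=e\,E_1(1)$). The same weighted mean $A_n$ at $x=1$ is computed at roughly double precision and at 50 digits and compared with $F(1)$:

```python
# Hedged illustration of the cancellation remark above: the plain weighted mean
# A_n at x = 1, computed at ~double precision and at 50 digits, versus F(1) = e*E_1(1).
from mpmath import mp, mpf, binomial, factorial, exp, expint, nstr

def A(n):                                    # assumed weights C(n,k)/k!
    s = num = den = mpf(0)
    for k in range(n + 1):
        s += (-1) ** k * factorial(k)        # partial sum s_k
        w = binomial(n, k) / factorial(k)
        num += w * s
        den += w
    return num / den

for dps in (16, 50):
    mp.dps = dps
    F1 = exp(1) * expint(1, 1)               # regularized sum at x = 1
    errs = [nstr(abs(A(n) - F1), 3) for n in (10, 20, 40, 80)]
    # at ~16 digits cancellation eventually wipes out the gain from larger n;
    # 50 digits pushes that point much further out
    print(dps, errs)
```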