Is every non-negative test function the limit of a sequence of sums of squares of test functions?
Given $f\in\mathcal{D}(\mathbb{R}^n)$ with $f\ge0$, choose $g\in\mathcal{D}(\mathbb{R}^n)$, $g\ge0$, such that $\mathrm{supp}(f)\subset \{g>0\}$, and let $\epsilon>0$. Then $\sqrt{f+\epsilon^2 g^2}$ is $C^\infty$: at any point where $g(x)>0$ this follows by composition with the smooth square root, and at any point where $g(x)=0$ we have $x\notin\mathrm{supp}(f)$, so $f$ vanishes in a neighborhood of $x$ and the function locally coincides with $\epsilon g$.
It is possible I have misunderstood the topology on $\mathcal{D}(\mathbb{R}^n)$; I take it to mean that all the derivatives must converge uniformly. With that understanding, let $a \in (0,3]$. I claim that $x^4 y^2+y^4 z^2+z^4 x^2 - a x^2 y^2 z^2$ cannot be written as a sum $\sum f_i(x,y,z)^2$ of squares of $C^3$ functions. Note that this function is nonnegative by the arithmetic-geometric mean inequality applied to the three terms $x^4y^2$, $y^4z^2$, $z^4x^2$.
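As a quick numerical sanity check (not part of the proof), one can evaluate the polynomial at many random points and confirm nonnegativity for $a=3$; the function name `F` is mine, chosen for this sketch:

```python
import random

def F(x, y, z, a=3.0):
    """The polynomial x^4 y^2 + y^4 z^2 + z^4 x^2 - a x^2 y^2 z^2."""
    return x**4 * y**2 + y**4 * z**2 + z**4 * x**2 - a * x**2 * y**2 * z**2

random.seed(0)
pts = [(random.uniform(-2, 2), random.uniform(-2, 2), random.uniform(-2, 2))
       for _ in range(10_000)]

# AM-GM: (x^4 y^2 + y^4 z^2 + z^4 x^2)/3 >= (x^6 y^6 z^6)^(1/3) = x^2 y^2 z^2,
# so F >= 0 whenever a <= 3; the small tolerance allows for rounding error.
assert all(F(x, y, z) >= -1e-9 for (x, y, z) in pts)

# equality in AM-GM holds e.g. at x = y = z = 1
assert F(1, 1, 1) == 0.0
```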
If we had $x^4 y^2+y^4 z^2+z^4 x^2 - a x^2 y^2 z^2 = \sum f_i(x,y,z)^2$, then the degree $k$ part of the Taylor series of the right hand side would converge to the degree $k$ part of the Taylor series of the left hand side. From this we see that the $f_i$ vanish to third order at the origin and, writing $c_i$ for the cubic part of the Taylor series of $f_i$, we have $x^4 y^2+y^4 z^2+z^4 x^2 - a x^2 y^2 z^2 = \sum c_i(x,y,z)^2$. This is the standard example of a nonnegative polynomial which cannot be written as a sum of squares, so now we just have to show that the same proof shows that it cannot be written as an infinite sum of squares.
There are no $x^6$, $y^6$ or $z^6$ terms on the right hand side, so the $c_i$ have no $x^3$, $y^3$ or $z^3$ terms. Similarly, looking at the coefficients of $x^2 y^4$, $y^2 z^4$ and $z^2 x^4$ (which vanish on the left hand side), we see that the $c_i$ have no $x y^2$, $y z^2$ or $z x^2$ terms. So $c_i(x,y,z) = p_i x^2 y + q_i y^2 z + r_i z^2 x + s_i x y z$. But then comparing coefficients of $x^2 y^2 z^2$ gives $-a = \sum s_i^2$, a contradiction.
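The coefficient extraction in the last step can be double-checked by exact integer arithmetic, representing a polynomial as a dictionary from exponent triples to coefficients; `poly_mul` and the sample values of $p,q,r,s$ are mine, chosen for this sketch:

```python
from collections import defaultdict

def poly_mul(P, Q):
    """Multiply polynomials given as {(ex, ey, ez): coeff} dicts."""
    R = defaultdict(int)
    for e1, c1 in P.items():
        for e2, c2 in Q.items():
            R[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(R)

# c = p*x^2*y + q*y^2*z + r*z^2*x + s*x*y*z with sample integer coefficients
p, q, r, s = 2, 3, 5, 7
c = {(2, 1, 0): p, (0, 2, 1): q, (1, 0, 2): r, (1, 1, 1): s}
c2 = poly_mul(c, c)

# No cross product of two distinct monomials of c lands on x^2 y^2 z^2,
# so only the square of the x*y*z term contributes:
assert c2[(2, 2, 2)] == s * s
```

Since this holds for each $c_i$ separately, the coefficient of $x^2y^2z^2$ in $\sum c_i^2$ is $\sum s_i^2\ge0$, while on the left it is $-a<0$.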
The last counterexample posted here led me to think a bit on the theme "What can be done then?". I came up with the following simple crank machine, which crumbles a nonnegative $C^1_c$ function $f$ into a $C^1$-convergent series of squares, and which could also work in other ordered algebras (in fact I suspect it may be a known procedure). We may assume with no loss of generality that $0\le f\le1/2$.
Consider the sequence of functions: $$\begin{cases} g_1=f\\ g_{n+1}=g_n-g_n^2 \end{cases}$$
It is immediate from the definition (by telescoping) that $f=\sum_{k=1}^n g_k^2 +g_{n+1}$, so that we have $f=\sum_{k=1}^\infty g_k^2$ in various senses, provided that the remainder $g_{n}$ converges to zero in the same sense.
Uniform convergence of $g_n$: We have $0\le g_n\le1$ and, at points where $f>0$, $${1\over g_{n+1}}={1\over g_{n}}+{1\over 1-g_{n}}\ge {1\over g_{n}}+1,$$ hence ${1\over g_{n}}\ge n$ and $g_n\le {1\over {n}}$ (where $f$ vanishes, so do all the $g_n$, and the bound holds trivially).
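Both the telescoping identity and the $1/n$ bound can be verified in exact rational arithmetic at a single point, say where $f$ takes the value $2/5\in[0,1/2]$; the name `crumble` is mine, chosen for this sketch:

```python
from fractions import Fraction

def crumble(f0, n):
    """Iterate g_{k+1} = g_k - g_k^2 and return ([g_1, ..., g_n], g_{n+1})."""
    gs, g = [], f0
    for _ in range(n):
        gs.append(g)
        g = g - g * g
    return gs, g

f0 = Fraction(2, 5)          # a sample value of f in [0, 1/2]
gs, rem = crumble(f0, 10)

# telescoping: f = sum_{k=1}^n g_k^2 + g_{n+1}, exactly
assert sum(g * g for g in gs) + rem == f0

# the bound g_k <= 1/k for k = 1, ..., n
assert all(g <= Fraction(1, k) for k, g in enumerate(gs, start=1))
```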
Uniform convergence of $\partial_i g_n$: We have $\partial_i g_{n+1}=(1-2g_n)\partial_ig_n$. Since $0\le 1-2g_n\le1-g_n={g_{n+1}\over g_n}\le 1$ we also have $|\partial_i g_{n+1}|\le {g_{n+1}\over g_n}|\partial_ig_n|\le |\partial_ig_n|$. Iterating the latter inequalities we get
$$|\partial_i g_{n}|\le {g_{n}\over f}|\partial_if|\le|\partial_i f|,$$ which implies the uniform convergence of $\partial_i g_{n}$ to $0$: on any set $\{f\ge\epsilon\}$ we have $|\partial_i g_n|\le {1\over n\epsilon}\|\partial_i f\|_\infty\to0$, while for any $\delta>0$ the compact set $\{|\partial_i f|\ge\delta\}$ is contained in $\{f\ge\epsilon\}$ for some $\epsilon>0$ (since $\nabla f$ vanishes wherever the nonnegative function $f$ does), so $|\partial_i g_n|\le|\partial_i f|<\delta$ elsewhere.
$$*$$
Remark: if for instance $f(x)=x^2+o(x^2)$ as $x\to0$, then also $g_n(x)=x^2+o(x^2)$, so that $g_n''(0)=2$ for all $n$ and can't converge to zero, as illustrated in the last post.
On the other hand, by David Speyer's example, we know that, in general, for a smooth $f$, no such series can converge in $C^3$.