Bayesian posterior with truncated normal prior
Your derivation is correct.
I think the result is also very intuitive. As you pointed out, if you have a normal prior and a normal likelihood, then the posterior will be another normal distribution.
$$f(\mu|x)\propto f(x|\mu) f(\mu)$$
Now suppose I came along and set a region of $f(\mu)$ to zero and scaled it by $c$ to renormalize it. For values of $\mu$ where the prior was not set to zero, the right-hand side of the above equation is the same except that $f(\mu) \to c f(\mu)$. Therefore, the left-hand side is also just scaled by $c$, but retains the exact shape of a normal distribution. So we end up with a scaled normal distribution, except of course at points where $f(\mu)$ is zero, where the left-hand side is also zero.
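Here is a quick numerical sketch of that argument, using only the standard library. All parameter values ($\mu_0$, $\sigma_0$, $\sigma$, $x$, $t$) are made-up illustrative numbers, and I assume the prior is truncated above at $t$ (support $\mu \le t$): the renormalized product of likelihood and truncated prior coincides, point by point, with the usual conjugate normal posterior re-truncated at the same $t$.

```python
import math

# Illustrative assumptions (not from the original post)
mu0, sigma0 = 0.0, 2.0   # prior mean and sd before truncation
sigma = 1.0              # known sd of the likelihood
x = 1.5                  # a single observation
t = 1.0                  # prior support truncated to mu <= t

def phi(z):   # standard normal pdf
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Grid over the truncated support (-10 is effectively -infinity here)
n = 4000
h = (t - (-10.0)) / n
grid = [-10.0 + h * i for i in range(n + 1)]

# Unnormalised posterior: likelihood(x | mu) * prior(mu); the prior's own
# truncation constant c is absorbed by the renormalisation below
post = [phi((x - u) / sigma) / sigma * phi((u - mu0) / sigma0) / sigma0
        for u in grid]
Z = (sum(post) - 0.5 * (post[0] + post[-1])) * h   # trapezoid rule
post = [p / Z for p in post]

# Conjugate update of the untruncated normal, then re-truncate at t
mu1 = (sigma**2 * mu0 + sigma0**2 * x) / (sigma**2 + sigma0**2)
s1 = math.sqrt(sigma**2 * sigma0**2 / (sigma**2 + sigma0**2))
ref = [phi((u - mu1) / s1) / s1 / Phi((t - mu1) / s1) for u in grid]

# Maximum pointwise discrepancy should be down at quadrature-error level
err = max(abs(a - b) for a, b in zip(post, ref))
```

The two curves agree up to the error of the trapezoid rule, which is exactly the "same shape, different scale" argument made above.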
It won't cause problems since it's correct, although it might not be a nice function to work with if you're trying to derive something analytically. For example, the mean of your posterior is a very long, complicated expression (which I was able to find in Mathematica).
The new density is again a truncated normal at $t$ with updated mean and variance $$\mu_1=\frac{\sigma^2\mu_0+\sigma_0^2 x}{\sigma^2+\sigma_0^2},\qquad \sigma_1^2=\frac{\sigma^2\sigma_0^2}{\sigma^2+\sigma_0^2}.$$ The new normalising constant is $$\Phi\left( \frac{t-\mu_1}{\sigma_1}\right )=\Phi\left( \frac{t-\frac{\sigma^2\mu_0+\sigma_0^2 x}{\sigma^2+\sigma_0^2}}{\sqrt{\sigma^2\sigma_0^2/(\sigma^2+\sigma_0^2)}}\right ),$$ where the denominator is the standard deviation $\sigma_1$, not the variance.
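Once the posterior is recognised as a truncated normal, its mean follows from the standard truncated-normal formula, $\mathbb{E}[\mu\mid x] = \mu_1 - \sigma_1\,\varphi(\alpha)/\Phi(\alpha)$ with $\alpha=(t-\mu_1)/\sigma_1$ (assuming truncation above at $t$). A sketch with made-up parameter values, cross-checking that closed form against direct numerical integration:

```python
import math

def phi(z):   # standard normal pdf
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative assumptions (not from the original post)
mu0, sigma0, sigma, x, t = 0.0, 2.0, 1.0, 1.5, 1.0

# Conjugate update: posterior mean mu1 and sd s1 of the untruncated normal
mu1 = (sigma**2 * mu0 + sigma0**2 * x) / (sigma**2 + sigma0**2)
s1 = math.sqrt(sigma**2 * sigma0**2 / (sigma**2 + sigma0**2))
alpha = (t - mu1) / s1

# Normalising constant of the posterior truncated above at t
Z = Phi(alpha)

# Closed-form mean of a normal truncated above at t
mean_closed = mu1 - s1 * phi(alpha) / Phi(alpha)

# Cross-check by integrating u * density over (-10, t] (trapezoid rule)
n = 4000
h = (t - (-10.0)) / n
grid = [-10.0 + h * i for i in range(n + 1)]
ud = [u * phi((u - mu1) / s1) / s1 / Z for u in grid]
mean_num = (sum(ud) - 0.5 * (ud[0] + ud[-1])) * h
```

Note that `alpha` divides by the standard deviation `s1`, matching the corrected normalising constant above; truncating from above pulls the mean below $\mu_1$.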
So the interesting thing is that conjugacy is preserved under truncation of the prior for the mean. It would be nice to study these posteriors for a fixed $t$ and different values of the prior parameters.