Why do you use $n-1$ in the standard error of the mean but $n$ in hypothesis testing?
The $n-1$ term DOES NOT appear in the formula for the standard error as you have written it. The $n-1$ term does appear, however, in the equations for the sample variance and the sample standard deviation. It is used to correct for the fact that $ \hat{\sigma^2} = \frac{1}{n} \sum(x_i - \bar{x})^2 $ is a biased estimator of the variance. This can be shown as follows:
$ \hat{\sigma^2} = \frac{1}{n} \sum(x_i - \bar{x})^2 = \frac{1}{n} \sum (x_i^2 - 2x_i\bar{x} + \bar{x}^2) = \frac{1}{n} \sum (x_i^2 - x_i\bar{x} - x_i\bar{x} + \bar{x}^2) $
$ = \frac{1}{n} \sum (x_i[x_i-\bar{x}] + \bar{x}[\bar{x}-x_i]) = \frac{1}{n} \sum (x_i[x_i - \bar{x}]) + \frac{\bar{x}}{n} \sum [\bar{x} - x_i] $
Since $ \frac{\sum [\bar{x} - x_i]}{n} = 0 $, we get:
$ = \frac{1}{n} \sum (x_i[x_i - \bar{x}]) = \frac{1}{n} \sum ({x_i}^2 - {x_i}\bar{x}) = \frac{\sum {x_i}^2}{n} - \bar{x} \sum \frac{x_i}{n} = \frac{\sum {x_i}^2}{n} - \bar{x}^2 $
This means that:
$ E[\hat{\sigma^2}] = E[X^2] - E[\bar{x}^2] $
We know that:
1) $ \sigma^2 = E[X^2] - (E[X])^2 \rightarrow E[X^2] = \sigma^2 + (E[X])^2 $
2) $ \bar{\sigma}^2 = \frac{\sigma^2}{n} = E[\bar{x}^2] - (E[\bar{x}])^2 \rightarrow E[\bar{x}^2] = \frac{\sigma^2}{n} + (E[\bar{x}])^2 $
Now substitute these equations back into the above equation, noting that $E[\bar{x}] = E[X] = \mu$ so that $(E[X])^2 = (E[\bar{x}])^2$, to get:
$ E[\hat{\sigma^2}] = \sigma^2 + (E[X])^2 - (\frac{\sigma^2}{n} + (E[\bar{x}])^2) = \sigma^2 - \frac{\sigma^2}{n} = \sigma^2 (\frac{n-1}{n}) $
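This $\frac{n-1}{n}$ factor is easy to check numerically. A minimal simulation sketch (assuming NumPy is available; the distribution, sample size, and seed below are arbitrary choices for illustration):

```python
import numpy as np

# Monte Carlo check that the divide-by-n estimator has expectation
# sigma^2 * (n - 1) / n.  All constants below are arbitrary.
rng = np.random.default_rng(0)
sigma2 = 4.0          # true variance of the sampled distribution
n = 5                 # a small n makes the bias easy to see
reps = 200_000

samples = rng.normal(loc=10.0, scale=np.sqrt(sigma2), size=(reps, n))
biased_var = samples.var(axis=1, ddof=0)   # divides by n

print(biased_var.mean())         # close to sigma2 * (n - 1) / n = 3.2
print(sigma2 * (n - 1) / n)      # 3.2
```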
To get an unbiased estimator for $ \sigma^2 $, we multiply $ \hat{\sigma^2} $ by $ \frac{n}{n-1} $ to get:
$ s^2 = \frac{n}{n-1} \times \frac{1}{n} \sum(x_i - \bar{x})^2 = \frac{1}{n-1} \sum(x_i - \bar{x})^2 $
The quantity $ s^2 $ is known as the sample variance, and $ s $ is the sample standard deviation. The standard error is simply $ \frac{s}{\sqrt{n}} $.
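For a concrete illustration, NumPy's `var` and `std` expose this correction through their `ddof` argument (the data below are made up):

```python
import numpy as np

# Sample variance, sample standard deviation, and standard error of the
# mean, all using the n - 1 (ddof=1) correction.  The data are made up.
x = np.array([4.1, 5.6, 3.8, 6.2, 5.0, 4.7])
n = len(x)

s2 = x.var(ddof=1)         # sample variance: divides by n - 1
s = x.std(ddof=1)          # sample standard deviation
se = s / np.sqrt(n)        # standard error of the mean

print(s2, s, se)
```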
The standard error of the sample mean is actually $$ \frac{s}{\sqrt{n}} $$ (there is no $n-1$ term here).
In hypothesis testing and confidence intervals you use $Z=(\bar X-\mu)/(\sigma/\sqrt n)$ because you are using the Central Limit Theorem, which states that the sample mean $\bar X$ is approximately normally distributed with standard deviation $\sigma/\sqrt n$. If you use the sample standard deviation, $\sigma/\sqrt n$ is replaced by $s/\sqrt n$ and the normal distribution is replaced by a Student's $t$ distribution.
Now, the $n-1$ comes into play when computing $$ s=\sqrt{\frac{1}{n-1}\sum (x_i-\bar x)^2}, $$ and it is there to account for the fact that, when computing $s$, you use the sample mean $\bar x$ in place of the (real, unknown) mean $\mu$.
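To make this concrete, here is a sketch of a one-sample test that uses $s/\sqrt{n}$ together with the Student $t$ distribution on $n-1$ degrees of freedom (assuming NumPy and SciPy; the data and $\mu_0$ are made up):

```python
import numpy as np
from scipy import stats

# One-sample test of H0: mu = mu0 using the sample standard deviation,
# so the reference distribution is Student's t with n - 1 degrees of
# freedom.  The data and mu0 are made up for illustration.
x = np.array([5.1, 4.9, 5.6, 5.3, 4.8, 5.4, 5.0, 5.2])
mu0 = 5.0
n = len(x)

s = x.std(ddof=1)              # the n - 1 enters here, inside s
se = s / np.sqrt(n)            # the standard error itself uses sqrt(n)
t_stat = (x.mean() - mu0) / se

p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)
print(t_stat, p_value)

# The same result from SciPy's built-in routine.
print(stats.ttest_1samp(x, mu0))
```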
In this answer, it is shown that since the sample data is closer to the sample mean, $\overline{x}$, than to the distribution mean, $\mu$, the variance of the sample data, computed with $$ \frac1n\sum_{k=1}^n\left(x_k-\overline{x}\right)^2 $$ is, on average, smaller than the distribution variance. In fact, on average, $$ \frac{\text{variance of the sample data}}{\text{variance of the distribution}}=\frac{n-1}{n} $$ This is why we use $$ \frac1{n-1}\sum_{k=1}^n\left(x_k-\overline{x}\right)^2 $$ to estimate the distribution variance given the sample data.
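A quick simulation sketch of that claim (assuming NumPy; the distribution and sample size are arbitrary):

```python
import numpy as np

# The sum of squared deviations about the sample mean is never larger
# than the sum of squared deviations about the true mean mu (the sample
# mean minimizes it), and on average the two differ by a factor of
# (n - 1) / n.
rng = np.random.default_rng(1)
mu, sigma = 0.0, 3.0
n, reps = 10, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
ss_about_xbar = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
ss_about_mu = ((samples - mu) ** 2).sum(axis=1)

print((ss_about_xbar <= ss_about_mu).all())       # True
print(ss_about_xbar.mean() / ss_about_mu.mean())  # close to (n - 1)/n = 0.9
```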