Proving that similarity transformation of state-space preserves the Euclidean norm
The problem here is really the notation.
This adds a bit more detail to the answer by obareey. Given a linear system
$$ \begin{align} \dot{x} &= A x + B u \\ y &= C x + D u \\ \end{align} $$
In this question $D = 0$, but that doesn't really change anything. Strictly speaking, the following is wrong:
$$ G = \left[ \begin{array}{c|c} A&B\\ \hline C&D \end{array} \right] $$
As mentioned by obareey, $G$ is not an ordinary (block) matrix but a transfer function matrix. This relationship is often written as
$$ G(s) \sim \left[ \begin{array}{c|c} A&B\\ \hline C&D \end{array} \right] $$
sometimes also written as
$$ G(s) \triangleq \left[ \begin{array}{c|c} A&B\\ \hline C&D \end{array} \right] \text{ or } G(s) \overset{s}= \left[ \begin{array}{c|c} A&B\\ \hline C&D \end{array} \right] $$
Unfortunately the notation varies and I have seen all of these. Some authors even use an equal sign, which is confusing (and, taken literally, wrong). However, they all mean the same thing, namely that $G(s) = C(s I - A)^{-1} B + D$, where $I$ is the identity matrix.
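If you want to see what this means concretely, here is a minimal numpy sketch that evaluates $G(s) = C(sI - A)^{-1}B + D$ at a point on the imaginary axis. The matrices are a hypothetical stable example, not the ones from the question:

```python
import numpy as np

def tf_eval(A, B, C, D, s):
    """Evaluate G(s) = C (s I - A)^{-1} B + D at a single complex point s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Hypothetical stable example realization (not taken from the question)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

print(tf_eval(A, B, C, D, 1j * 1.0))  # G(j*1)
```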
The $2$-norm of a general MIMO transfer function is
$$ \Vert G(s) \Vert_2 = \Big( \frac{1}{2 \pi} \int_{-\infty}^{\infty} \text{trace} \big[ G(j \omega)^H G(j \omega) \big] d \omega \Big)^{1/2} $$
so of course $\Vert G(s) \Vert_2 = \Vert \widetilde{G}(s) \Vert_2$ whenever $G(s) = \widetilde{G}(s)$, assuming that $A$ is stable (and $D = 0$, as in the question, so $G$ is strictly proper) such that the integral exists, i.e. $\Vert G(s) \Vert_2 < \infty$. So all you need to show is that
$$ C(s I - A)^{-1} B + D = \widetilde{C}(s I - \widetilde{A})^{-1} \widetilde{B} + \widetilde{D} $$
for any invertible $T$, where
$$ \begin{align} \widetilde{A} &= T A T^{-1} \\ \widetilde{B} &= T B \\ \widetilde{C} &= C T^{-1} \\ \widetilde{D} &= D \end{align} $$
This is standard realization theory. You can check that both transfer functions are the same:
$$ \begin{align} \widetilde{G}(s) &= \widetilde{C}(s I - \widetilde{A})^{-1} \widetilde{B} + \widetilde{D} \\ &= C T^{-1}(s I - T A T^{-1})^{-1} T B + D \\ &= C \big(T^{-1}(s I - T A T^{-1})T\big)^{-1}B + D \\ &= C \big(T^{-1} s I T - T^{-1} T A T^{-1} T\big)^{-1}B + D \\ &= C(s I - A)^{-1} B + D \\ &= G(s) \end{align} $$
because $T^{-1} s I T = s(T^{-1} I T) = s(T^{-1} T) = s I$ since $T^{-1} T = I$, and using the fact that $(K_1 K_2 K_3)^{-1} = K_3^{-1} K_2^{-1} K_1^{-1}$ for any invertible matrices $K_1, K_2, K_3$.
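You can also sanity-check this algebra numerically. The sketch below reuses `tf_eval` and the example matrices from above, picks an arbitrary (almost surely invertible) $T$, and compares $G(j\omega)$ with $\widetilde{G}(j\omega)$ at a few frequencies:

```python
# Numerical check of the derivation, reusing tf_eval and (A, B, C, D) from above.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2)) + 2.0 * np.eye(2)  # generic, almost surely invertible
Ti = np.linalg.inv(T)

At, Bt, Ct, Dt = T @ A @ Ti, T @ B, C @ Ti, D  # similarity-transformed realization

for w in (0.1, 1.0, 10.0):
    g  = tf_eval(A,  B,  C,  D,  1j * w)
    gt = tf_eval(At, Bt, Ct, Dt, 1j * w)
    assert np.allclose(g, gt), f"mismatch at omega = {w}"
print("G(jw) and G~(jw) agree at the sampled frequencies")
```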
So because $G(s) = \widetilde{G}(s)$, their $2$-norms are the same.
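Finally, here is a crude numerical approximation of the $2$-norm integral above (trapezoid rule over a truncated frequency grid, with $D = 0$ assumed so the integral converges). It is only meant as a sanity check that both realizations give the same value, not as a robust way to compute the norm:

```python
def h2_norm_approx(A, B, C, D, w_max=1e3, n_pts=20001):
    """Crude trapezoidal approximation of the 2-norm integral (assumes D = 0)."""
    w = np.linspace(-w_max, w_max, n_pts)
    vals = np.empty_like(w)
    for k, wk in enumerate(w):
        G = tf_eval(A, B, C, D, 1j * wk)
        vals[k] = np.trace(G.conj().T @ G).real  # trace[G(jw)^H G(jw)]
    integral = np.sum((vals[:-1] + vals[1:]) * np.diff(w)) / 2.0  # trapezoid rule
    return np.sqrt(integral / (2.0 * np.pi))

print(h2_norm_approx(A, B, C, D))      # original realization
print(h2_norm_approx(At, Bt, Ct, Dt))  # transformed realization, same value
```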