# truncation before matrix exponential: how to do it right?

I found the answer, at least for some of the cases that I thought were intractable, including my examples. The main tool is the disentangling theorem for the relevant group.

**Example 1**: the transformation is in $SU(2)$, so we need the $SU(2)$ disentangling theorem:
$$
\exp(z J_+-z^* J_-) = \exp(\tau_+ J_+)\exp(\tau_0 J_0)\exp(\tau_-J_-)
$$
Here $\{J_0,J_\pm\}$ satisfy the $su(2)$ algebra relations: $[J_\pm,J_0]=\mp J_\pm,\ [J_-,J_+]=-2J_0$, and if $z=re^{i\phi}$, then $\tau_\pm=\pm e^{\pm i\phi}\tan(r),\tau_0=2\log\sec(r)$.
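
As a sanity check, the identity can be verified numerically in the spin-1/2 representation, where $J_0=\sigma_z/2$ and $J_\pm=\sigma_\pm$. A quick numpy/scipy sketch (parameter values are arbitrary choices of mine; $r<\pi/2$ is required):

```python
import numpy as np
from scipy.linalg import expm

# spin-1/2 representation: J0 = sigma_z/2, J+ = sigma_+, J- = sigma_-
Jp = np.array([[0, 1], [0, 0]], dtype=complex)
Jm = Jp.conj().T
J0 = np.diag([0.5, -0.5]).astype(complex)

r, phi = 0.7, 0.3                       # z = r e^{i phi}, with r < pi/2
z = r * np.exp(1j * phi)

lhs = expm(z * Jp - np.conj(z) * Jm)

tau_p = np.exp(1j * phi) * np.tan(r)
tau_m = -np.exp(-1j * phi) * np.tan(r)
tau_0 = 2 * np.log(1 / np.cos(r))       # 2 log sec(r)
rhs = expm(tau_p * Jp) @ expm(tau_0 * J0) @ expm(tau_m * Jm)

assert np.allclose(lhs, rhs)            # both sides agree
```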
If we apply this to the beamsplitter transformation in the form $U(\theta)=\exp[\theta(a^\dagger b-ab^\dagger)]$, we obtain
$$
U(\theta) = \exp(\tan(\theta)a^\dagger b)\exp[\log\sec(\theta)(a^\dagger a-b^\dagger b)]\exp(-\tan(\theta)a b^\dagger)
$$
which has no truncation issues.
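
To see the two forms agree in practice, here is a small check on a truncated two-mode Fock space (the cutoff and $\theta$ are arbitrary choices of mine). On the single-photon sector the beamsplitter acts as a rotation, $U|1,0\rangle = \cos\theta\,|1,0\rangle - \sin\theta\,|0,1\rangle$, which both forms reproduce:

```python
import numpy as np
from scipy.linalg import expm

N = 10                                     # per-mode Fock cutoff (my choice)
a1 = np.diag(np.sqrt(np.arange(1, N)), 1)  # truncated annihilation operator
a = np.kron(a1, np.eye(N))                 # mode a acts on the first factor
b = np.kron(np.eye(N), a1)                 # mode b acts on the second
ad, bd = a.T, b.T

theta = 0.4
U_direct = expm(theta * (ad @ b - a @ bd))
U_dis = (expm(np.tan(theta) * ad @ b)
         @ expm(np.log(1 / np.cos(theta)) * (ad @ a - bd @ b))
         @ expm(-np.tan(theta) * a @ bd))

# single photon in mode a: U|1,0> = cos(theta)|1,0> - sin(theta)|0,1>
ket10 = np.zeros(N * N)
ket10[1 * N + 0] = 1.0
out = U_dis @ ket10
assert np.isclose(out[1 * N + 0], np.cos(theta))
assert np.isclose(out[0 * N + 1], -np.sin(theta))
assert np.allclose(U_direct @ ket10, U_dis @ ket10)
```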

**Example 2**: the transformation is in $SU(1,1)$, so we need the $SU(1,1)$ disentangling theorem:
$$
\exp(z K_+-z^* K_-) = \exp(\sigma_+ K_+)\exp(\sigma_0K_0)\exp(\sigma_-K_-)
$$
Here $\{K_0,K_\pm\}$ satisfy the $su(1,1)$ algebra relations: $[K_\pm,K_0]=\mp K_\pm,\ [K_-,K_+]=2K_0$, and if $z=re^{i\phi}$, then $\sigma_\pm=\pm e^{\pm i\phi}\tanh(r),\sigma_0=-2\log\cosh(r)$. If we apply this to the squeezing operator $S(z)=\exp[z \frac{{a^\dagger}^2}{2}-z^* \frac{{a}^2}{2}]$, we obtain
$$
S(z) = \exp\left(e^{i\phi}\tanh(|z|)\frac{{a^\dagger}^2}{2}\right)\exp\left(-\log\cosh(|z|)(a^\dagger a+\frac{1}{2})\right)\exp\left(-e^{-i\phi}\tanh(|z|)\frac{{a}^2}{2}\right)
$$
which has no truncation issues.
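
A sketch of the payoff in a truncated Fock space (the cutoff and $r$ are arbitrary choices of mine): each factor of the disentangled product only raises or only lowers the photon number, so its truncated matrix elements are exact, and $\langle 0|S(r)|0\rangle = (\cosh r)^{-1/2}$ comes out essentially to machine precision, while exponentiating the truncated generator directly leaves a cutoff error:

```python
import numpy as np
from scipy.linalg import expm

N = 12                                    # Fock cutoff (my choice)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # truncated annihilation operator
ad = a.T
n_op = ad @ a

r = 1.0                                   # z = r (phi = 0)
S_direct = expm(r * (ad @ ad - a @ a) / 2)
S_dis = (expm(np.tanh(r) * ad @ ad / 2)
         @ expm(-np.log(np.cosh(r)) * (n_op + 0.5 * np.eye(N)))
         @ expm(-np.tanh(r) * a @ a / 2))

exact_00 = 1 / np.sqrt(np.cosh(r))        # exact <0|S(r)|0>
err_dis = abs(S_dis[0, 0] - exact_00)     # ~ machine precision
err_direct = abs(S_direct[0, 0] - exact_00)  # dominated by the cutoff
assert err_dis < 1e-10
assert err_direct > err_dis
```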

**General case**: in general a suitable disentangling theorem might not exist, but results like this might help in approximating one (see eqs. (30)-(35) for the two examples above).

I hope this helps anyone who stumbles upon the same issue.

As a practical matter, this is a situation where you expect the exponential to converge as the size of the truncated input grows. So, without knowing anything else about the problem in question, I would

- find the desired size of the answer matrix (say, $n\times n$),
- truncate the input matrix at twice that size ($2n \times 2n$) and calculate the exponential,
- calculate the exponential again at one larger ($[2n+1]\times[2n+1]$),
- check whether the $n\times n$ block has converged to the desired numerical accuracy; if it has, terminate; if not, double the size again, and so on.
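
A minimal sketch of this loop, assuming the truncated generator can be built at any size (function names are mine; the squeezing generator from the other answer serves as a test case):

```python
import numpy as np
from scipy.linalg import expm

def truncated_exp(generator, n, tol=1e-10, max_size=1024):
    """Return the n-by-n top-left block of exp(generator), where
    generator(m) builds the operator truncated to m-by-m.  Doubles
    the working size until the block stops changing."""
    m = 2 * n
    while m <= max_size:
        block = expm(generator(m))[:n, :n]          # exponential at size m
        block_next = expm(generator(m + 1))[:n, :n] # and at size m + 1
        if np.max(np.abs(block - block_next)) < tol:
            return block_next
        m *= 2
    raise RuntimeError("not converged up to max_size=%d" % max_size)

def squeeze_gen(m, r=0.8):
    """Truncated squeezing generator r(a^dag^2 - a^2)/2, as a test case."""
    a = np.diag(np.sqrt(np.arange(1, m)), 1)
    return r * (a.T @ a.T - a @ a) / 2

block = truncated_exp(squeeze_gen, n=6)
# the converged block reproduces <0|S(r)|0> = 1/sqrt(cosh r)
assert np.isclose(block[0, 0], 1 / np.sqrt(np.cosh(0.8)), atol=1e-6)
```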

Ideally, as @Adam mentioned in a comment, you would find the eigenvectors and eigenvalues of the operator/matrix in question to do the calculation. Examining that approach is instructive in figuring out whether the above algorithm can converge. If the expression for the original operator is: $$A_{ij} = \sum_{k} V_{ki}^\star \lambda_k V_{kj},$$ then the exponential is: $$[\exp(A)]_{ij} = \sum_{k} V_{ki}^\star \mathrm{e}^{\lambda_k} V_{kj}.$$
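
In code, for a Hermitian truncated matrix this is a one-liner with `numpy.linalg.eigh` (note `eigh` returns eigenvectors as the *columns* of $V$, so the sum reads $[\exp(A)]_{ij}=\sum_k V_{ik}\,\mathrm{e}^{\lambda_k}\,V^\star_{jk}$). A sketch with a stand-in Hermitian operator of my choosing:

```python
import numpy as np
from scipy.linalg import expm

N = 12
a = np.diag(np.sqrt(np.arange(1, N)), 1)
H = a.T @ a + 0.3 * (a + a.T)            # Hermitian stand-in operator

lam, V = np.linalg.eigh(H)               # columns of V are eigenvectors
expH = (V * np.exp(lam)) @ V.conj().T    # V diag(e^lam) V^dag
assert np.allclose(expH, expm(H))        # matches the direct exponential
```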

Convergence of the second expression requires that the eigenvectors do not mix the basis states too widely; otherwise the truncated sum will not converge. For example, if the eigenvalues grow linearly with $k$, then the eigenvectors have to fall off faster than exponentially in $|i-k|$. In the ideal case each eigenvector has a finite number of nonzero components, which guarantees convergence, but that will not always be the case.