What is special in dimension $2$ (When characterizing isometries using the cofactor matrix)?
I don't know how much you want, but the moment you write things in terms of multilinear algebra, everything seems to become pretty transparent.
In general, let $V$ be an $n$-dimensional real vector space and let $L : V \to V$ be a linear transformation. It's well known that the determinant $\det(L)$ of $L$ can be invariantly defined as the unique scalar such that
$$
\wedge^n L = \det(L) \operatorname{id}_{\wedge^n V}.
$$
What's perhaps a little less well known is that the adjugate $\operatorname{adj}(L)$ of $L$ can be invariantly defined as the unique linear transformation $V \to V$ such that
$$
\forall v \in V, \forall \omega \in \wedge^{n-1} V,\quad \operatorname{adj}(L)(v) \wedge \omega = v \wedge (\wedge^{n-1}L)(\omega),
$$
from which you can derive the well-known relationship
$$
\operatorname{adj}(L) \circ L = \det(L) \operatorname{id}_V
$$
at the level of linear transformations; if $L : \mathbb{R}^n \to \mathbb{R}^n$ is left multiplication by an $n \times n$ matrix $A$, then $\operatorname{adj}(L)$ is precisely left multiplication by the adjugate matrix $\operatorname{adj}(A)$ of $A$. Finally, if you give $V$ an inner product, then, once more, $L \in SO(V)$ if and only if $\det(L) = 1$ and $L^\ast = \operatorname{adj}(L)$.
Let's now take a closer look at the case where $L^\ast = \operatorname{adj}(L)$. Since we then have $L^\ast L = \det(L) \operatorname{id}_V$, and since $L^\ast L$ is positive semi-definite, we again find that either $L = 0$ or $L$ is invertible with $\det(L) > 0$; taking determinants on both sides also gives $\det(L)^2 = \det(L)^n$. Restricting ourselves to the non-trivial case, if
$$
R := \det(L)^{-1/2} L,
$$
then $R \in SO(V)$, so that
$$
L = (\det(L)^{1/2} \operatorname{id}_V) \circ R
$$
is necessarily a scaled rotation; again, if $n=2$, then, as you've checked, this is the most you can say about $L$, whilst if $n \neq 2$, then $\det(L)^2 = \det(L)^n$ together with $\det(L) > 0$ forces $\det(L) = 1$, and hence $L = R \in SO(V)$.
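The scaled-rotation decomposition in $n = 2$ can be sketched numerically as well (plain Python, assuming the standard fact that $A^\ast = \operatorname{adj}(A)$ forces the form $\begin{pmatrix} a & -b \\ b & a \end{pmatrix}$, since $A^T = \operatorname{adj}(A)$ entrywise gives $a = d$ and $c = -b$):

```python
import math

# In 2D, A^T = adj(A) forces the form [[a, -b], [b, a]],
# so A = s * R with s = sqrt(det A) = sqrt(a^2 + b^2) and R a rotation.
a, b = 3.0, 4.0
A = [[a, -b], [b, a]]

detA = a * a + b * b          # det A = a^2 + b^2 = 25
s = math.sqrt(detA)           # scaling factor det(A)^(1/2), here 5
R = [[A[i][j] / s for j in range(2)] for i in range(2)]

# R is a genuine rotation: R^T R = I and det R = 1
RtR = [[sum(R[k][i] * R[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert all(abs(RtR[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
assert abs(R[0][0] * R[1][1] - R[0][1] * R[1][0] - 1.0) < 1e-12
```

Note that any choice of $(a, b) \neq (0, 0)$ works here, which is exactly the $n = 2$ phenomenon: the scaling $\det(L)^{1/2}$ is unconstrained.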
This is also not a formal answer, but it gives another (similar) viewpoint on the difference between dimension $2$ and higher dimensions:
When $n=2$, the mapping $A \mapsto \text{cof}\, A$ is linear, and hence the solutions to $\text{cof}\,A = A$ must form a subspace; in particular, the solution set is invariant under dilations. In higher dimensions, $A \mapsto \text{cof}\, A$ is not linear; in particular, $\text{cof}\,(\lambda A) = \lambda^{n-1}\, \text{cof}\,A$, so the solution set of $\text{cof}\,A = A$ is not invariant under dilations.
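Both claims are easy to check by hand on small matrices; here is a minimal sketch in plain Python (the helpers `cof2` and `cof3` are mine) illustrating linearity of the cofactor map in 2D versus degree-$(n-1)$ homogeneity in 3D:

```python
# cof A = adj(A)^T; in 2D each entry of cof A is a linear function of the
# entries of A, so cof(A + B) = cof A + cof B and cof(t A) = t cof A.
def cof2(A):
    (a, b), (c, d) = A
    return [[d, -c], [-b, a]]

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, -1.0], [0.0, 2.0]]
S = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert cof2(S) == [[cof2(A)[i][j] + cof2(B)[i][j] for j in range(2)]
                   for i in range(2)]

# In 3D each cofactor is a signed 2x2 minor, hence quadratic in the
# entries: cof(t A) = t^2 cof A, so cof A = A has a non-linear solution set.
def cof3(A):
    def minor(i, j):
        rows = [r for k, r in enumerate(A) if k != i]
        m = [[x for l, x in enumerate(r) if l != j] for r in rows]
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[(-1) ** (i + j) * minor(i, j) for j in range(3)]
            for i in range(3)]

C = [[1.0, 0.0, 2.0], [0.0, 3.0, 1.0], [4.0, 1.0, 0.0]]
t = 2.0
tC = [[t * x for x in row] for row in C]
assert cof3(tC) == [[t * t * x for x in row] for row in cof3(C)]
```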