Simple proof that if $A^n=I$ then $\mathrm{tr}(A^{-1})=\overline{\mathrm{tr}(A)}$

Addressing the question raised in the comments:

Claim. Suppose that the minimal polynomial of $A$ has distinct roots. Then $A$ is diagonalizable, and conversely.

Proof. Write $m_A = (X-\lambda_1)\cdots (X-\lambda_t)$ and set $m_i = m_A/(X-\lambda_i)$. Since the $\lambda_i$ are distinct, the $m_i$ have no common root, so we can find polynomials $p_i$ such that $1=\sum_{i=1}^t p_i m_i$. Moreover, for every $v\in V$ the vector $v_i = p_i(A)m_i(A)v$ is either zero or satisfies $(A-\lambda_i)v_i=0$, because $(X-\lambda_i)m_i = m_A$; hence $v = \sum_{i=1}^t v_i$ is a sum of eigenvectors of $A$. This shows $V$ is a sum of eigenspaces, and the sum is direct since the $\lambda_i$ are distinct. $\blacktriangleleft$
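
To see the decomposition concretely, here is a small numerical illustration (not part of the proof; the matrix and all choices below are mine). It uses the cyclic permutation matrix, whose minimal polynomial is $(X-1)(X-\omega)(X-\omega^2)$ with $\omega=e^{2\pi i/3}$, together with the particular choice $p_i=1/m_i(\lambda_i)$, which is one valid set of constants in the identity $1=\sum_i p_i m_i$ when the roots are distinct:

```python
import numpy as np

# Illustration only: the cyclic permutation matrix satisfies A^3 = I, and its
# minimal polynomial m_A = (X-1)(X-w)(X-w^2) has distinct roots.
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)
lams = np.exp(2j * np.pi * np.arange(3) / 3)   # the distinct roots of m_A

I = np.eye(3, dtype=complex)
projectors = []
for i, lam_i in enumerate(lams):
    # m_i(A) = prod_{j != i} (A - lam_j I); the constant p_i = 1/m_i(lam_i)
    # is one valid choice in the identity 1 = sum_i p_i m_i.
    m_i_A = I.copy()
    m_i_lam = 1.0 + 0j
    for j, lam_j in enumerate(lams):
        if j != i:
            m_i_A = m_i_A @ (A - lam_j * I)
            m_i_lam *= (lam_i - lam_j)
    P_i = m_i_A / m_i_lam
    projectors.append(P_i)
    # (A - lam_i I) P_i = m_A(A)/m_i(lam_i) = 0, so each P_i v is an
    # eigenvector for lam_i (or zero).
    assert np.allclose((A - lam_i * I) @ P_i, 0)

# The P_i sum to the identity, so every v decomposes as v = sum_i P_i v.
assert np.allclose(sum(projectors), I)
print("decomposition into eigenvectors verified")
```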

Since $X^n-1$ annihilates $A$, the minimal polynomial $m_A$ divides $X^n-1$, which has distinct roots; hence $m_A$ has distinct roots and $A$ is diagonalizable. Since its trace is the sum of its eigenvalues, which are roots of unity, the proposed argument in $(1)$ goes through.
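
As a quick sanity check of the statement itself (illustration only), one can manufacture a matrix with $A^n=I$ by conjugating a diagonal matrix of $n$-th roots of unity by a random invertible matrix, an arbitrary construction chosen just for this test, and compare the two traces numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, size = 6, 5   # arbitrary choices for the illustration

# Conjugate a diagonal matrix of n-th roots of unity by a random invertible S,
# which produces some (generally non-unitary) A with A^n = I.
roots = np.exp(2j * np.pi * rng.integers(0, n, size) / n)
S = rng.standard_normal((size, size)) + 1j * rng.standard_normal((size, size))
A = S @ np.diag(roots) @ np.linalg.inv(S)

assert np.allclose(np.linalg.matrix_power(A, n), np.eye(size))
# The claim: tr(A^{-1}) equals the complex conjugate of tr(A).
assert np.isclose(np.trace(np.linalg.inv(A)), np.conj(np.trace(A)))
print("tr(A^-1) == conj(tr(A)) holds numerically")
```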


Since the question quickly attracted four votes, I'll try to use only a minimum of known linear algebra to get the result. (One more comment: since complex conjugation is involved, there is no purely algebraic proof, e.g. one that uses only polynomial/functional calculus in $A$.)

We start with a matrix $A$, viewed as an endomorphism of a vector space $V$ of finite dimension $\ge 1$ over $\Bbb C$, such that for a suitable natural number $n$ we have $$A^n=I\ .$$

Let $v\ne 0$ be a vector in $V$. The sequence $v, Av, A^2 v,\dots, A^nv=v, \dots$ is periodic; let $d$ be its period, a divisor of $n$. If $d=1$ we record this $v$ and set $w=v$. Else, let $\xi$ be a primitive $d$-th root of unity in $\Bbb C$, e.g. $\xi=\exp\frac {2\pi i}d$ if we want to fix the ideas (and leave algebra). Consider the following vectors in $V$:
$$
\begin{aligned}
w_0 &=v +Av+\dots+A^{d-1}v\ ,\\
w_1 &=v +\xi Av+\dots+(\xi A)^{d-1}v\ ,\\
&\ \ \vdots\\
w_k &=v +(\xi^k A)v+\dots+(\xi^k A)^{d-1}v\ ,\\
&\ \ \vdots\\
w_{d-1} &=v +(\xi^{d-1} A)v+\dots+(\xi^{d-1} A)^{d-1}v\ .
\end{aligned}
$$
If at least one of these vectors is $\ne 0$, then we record it and set $w$ to be one choice among them. Else?! Else we have the situation formally described by the following relation:
$$
\underbrace{
\begin{bmatrix}
1 & 1 & 1 & \dots & 1\\
1 & \xi & \xi^2 &\dots & \xi^{d-1}\\
1 & \xi^2 & \xi^4 &\dots & \xi^{2(d-1)}\\
\vdots &\vdots &\vdots &\ddots &\vdots\\
1 & \xi^{d-1} & \xi^{2(d-1)} &\dots & \xi^{(d-1)(d-1)}
\end{bmatrix}
}_{\text{Vandermonde}(1,\xi,\dots,\xi^{d-1})}
\begin{bmatrix}
v \\ Av\\ A^2 v\\\vdots\\A^{d-1}v
\end{bmatrix}
=
\begin{bmatrix}
0 \\ 0\\ 0\\\vdots\\0
\end{bmatrix}
\ .
$$
The Vandermonde matrix is invertible, so we formally multiply from the left by its inverse. To be exact, this amounts to taking linear combinations of the given formulas for $w_0,w_1,\dots,w_{d-1}$ that isolate $0=v=Av=\dots$, which is a contradiction. So in all cases we have constructed a $w\ne 0$ such that $Aw=\xi^{-k}w$ for some $k$ (indeed $Aw_k=\xi^{-k}w_k$, using $A^dv=v$ and $\xi^{kd}=1$), so the eigenvalue is a root of unity. We then consider $V'$, the quotient space or some subspace of $V$ spanned by "the other" vectors that extend the linearly independent system $\{w\}$ to a basis, and consider the same problem for the induced/restricted $A$ on $V'$.
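
Here is a small numerical sketch of this combination step (illustration only; the cyclic-shift matrix, the random $v$ and the choice $d=n$ are my own assumptions). It builds the $w_k$, checks that each one satisfies $Aw_k=\xi^{-k}w_k$, and confirms that the Vandermonde matrix is invertible, so the $w_k$ cannot all vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6   # arbitrary choice for the illustration

# Hypothetical setup: a cyclic-shift matrix with A^n = I, acting on a random v.
A = np.roll(np.eye(n), 1, axis=0).astype(complex)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)

d = n                           # take the period to be n (it always divides n)
xi = np.exp(2j * np.pi / d)     # primitive d-th root of unity

orbit = [np.linalg.matrix_power(A, j) @ v for j in range(d)]   # v, Av, ..., A^{d-1}v
w = [sum(xi**(k * j) * orbit[j] for j in range(d)) for k in range(d)]

# Each nonzero w_k is an eigenvector: A w_k = xi^{-k} w_k (shift the sum by one
# step and use A^d v = v, xi^{kd} = 1).
for k in range(d):
    assert np.allclose(A @ w[k], xi**(-k) * w[k])

# They cannot all vanish: the Vandermonde matrix V[k, j] = xi^{kj} is invertible,
# and applied blockwise to (v, Av, ..., A^{d-1}v) it gives (w_0, ..., w_{d-1}),
# so w_0 = ... = w_{d-1} = 0 would force v = 0.
V = np.array([[xi**(k * j) for j in range(d)] for k in range(d)])
assert abs(np.linalg.det(V)) > 1e-9
print("at least one w_k is a nonzero eigenvector with root-of-unity eigenvalue")
```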

Inductively we get a basis of $V$ in which $A$ acts diagonally (or upper triangularly, if we take quotients), and the diagonal entries are $\xi_1,\xi_2,\dots$, all of them roots of unity.

Then the inverse matrix has the same shape, with diagonal $\xi_1^{-1},\xi_2^{-1},\dots$, and the equality of the traces reduces to $$ \frac 1{\xi_1}+ \frac 1{\xi_2}+ \dots = \overline{\xi_1}+ \overline{\xi_2}+ \dots $$ which is true, since each $\xi_j$ has absolute value one, and therefore $1/\xi_j=\overline{\xi_j}$.
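
A short check of this last step (illustration only; the upper-triangular matrix below, with roots of unity on the diagonal and arbitrary entries above it, is an assumed model of the shape reached by the induction): the inverse is again upper triangular with reciprocal diagonal, so the two traces agree.

```python
import numpy as np

rng = np.random.default_rng(2)
size, n = 5, 12   # arbitrary choices for the illustration

# Assumed triangular model: roots of unity xi_1, ..., xi_size on the diagonal,
# arbitrary complex entries strictly above it.
diag = np.exp(2j * np.pi * rng.integers(0, n, size) / n)
upper = rng.standard_normal((size, size)) + 1j * rng.standard_normal((size, size))
T = np.triu(upper, 1) + np.diag(diag)

T_inv = np.linalg.inv(T)
# The inverse has the same shape, with diagonal 1/xi_1, ..., 1/xi_size ...
assert np.allclose(np.tril(T_inv, -1), 0)
assert np.allclose(np.diag(T_inv), 1 / diag)
# ... and since |xi_j| = 1, we have 1/xi_j = conj(xi_j), so tr(T^-1) = conj(tr T).
assert np.isclose(np.trace(T_inv), np.conj(np.trace(T)))
print("trace identity verified for the triangular shape")
```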


If the trouble is just a simple proof of the fact that $$ \text{tr}\,A=\sum_{i=1}^n\lambda_i $$ you can try the following approach instead of the Jordan normal form (JNF).

  1. In the field $\Bbb C$ we can factorize $\det(\lambda I-A)=\prod_{i=1}^n(\lambda-\lambda_i)$, where the $\lambda_i$ are all the eigenvalues (repeated according to multiplicity). The coefficient of $\lambda^{n-1}$ is $\color{red}{-\sum_{i=1}^n\lambda_i}$.
  2. Prove (e.g. by expanding the determinant along the first column + induction) that the coefficients of $\lambda^n$ and $\lambda^{n-1}$ come from the product of the main diagonal only \begin{align} \det(\lambda I-A)&=\begin{vmatrix}\lambda-a_{11} & -a_{12} & \ldots & -a_{1n}\\-a_{21} & \lambda-a_{22} & \ldots & -a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ -a_{n1} & -a_{n2} & \ldots & \lambda-a_{nn} \end{vmatrix}=\prod_{i=1}^n(\lambda-a_{ii})+<\text{terms of $\deg\le n-2$}>\\ &=\lambda^n\color{red}{-\text{tr}\,A}\,\lambda^{n-1}+<\text{terms of $\deg\le n-2$}>. \end{align} This is because for any cofactor $A_{j1}$, $j>1$, of the first column you remove two of the diagonal factors $\lambda-a_{ii}$: the one in the first column ($i=1$) and the one in the $j$th row ($i=j$), so that cofactor contributes degree at most $n-2$. (A numerical check of steps 1–2 follows the list.)
  3. Compare the red coefficients to conclude.
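
A small numerical check of steps 1–2 (illustration only; the matrix and its size are arbitrary). `numpy.poly` returns the coefficients of the characteristic polynomial in descending powers, computed from the eigenvalues, so the comparison with $-\text{tr}\,A$ is the substantive part:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4   # arbitrary choice for the illustration
A = rng.standard_normal((n, n))

# Coefficients of det(lambda I - A), descending powers, leading coefficient 1.
coeffs = np.poly(A)
eigvals = np.linalg.eigvals(A)

# Step 1: the lambda^{n-1} coefficient is -(sum of eigenvalues).
# (np.poly builds the polynomial from the eigenvalues, so this is essentially
# by construction.)
assert np.isclose(coeffs[1], -np.sum(eigvals))
# Step 2: the same coefficient equals -tr(A), since only the main-diagonal
# product contributes in degrees n and n-1.
assert np.isclose(coeffs[1], -np.trace(A))
print("tr(A) == sum of eigenvalues verified")
```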