Constructive proof of a rational version of Perron-Frobenius?
The book *A Course in Constructive Algebra* by Mines, Richman & Ruitenburg covers the constructive development of discrete fields and their algebraic closure, and even their valuation-metric completion.
This is a very direct and clear development, showing that for discrete fields (such as $\mathbb{Q}$), the algebraic closure (in this case $\mathbb{A}$) is again a discrete field. In other words, for your situation, the characteristic polynomial of $A$ can be constructively factored over $\mathbb{A}$, which should give you enough constructive traction to prove constructive analogues of the two theorems you give above.
However, given that you speak of non-negative matrices (not necessarily irreducible for theorem 1), I wonder whether thms. 1 and 2 are true constructively.
For both theorems, since you do not restrict to positive matrices, approximation with positive matrices is necessary. Usually we then cannot sharply conclude things like "$Av \geq v$ or $Av < v$" (thm. 1), "$Av > v$ or $Av = v$ or $Av < v$" (thm. 2), or even "$v \in \mathbb{Q}^n$" (both thms.).
So I would advise first proving the weaker versions where $A$ is a positive matrix, and then studying where approximations come in when proving your constructive analogues.
[update to reflect the comments below:]
Constructive Perron-Frobenius for positive matrices follows easily from the above, since (obviously) once we split the characteristic polynomial $p(A)$ over $\mathbb{A}$, we will find its roots, which have decidable equality as well as decidable membership of $\mathbb{R}$. From the classical theory and KG (the Kleene Getaway, see below), it follows constructively that we find exactly one positive real zero $s$ of $p(A)$ such that $s$ has strictly the largest absolute value of all the zeros of $p(A)$.
[In essence KG boils down to this: we know that we find exactly one positive real zero $s$ with these properties, because a) we can factor $p(A)$ constructively, b) positivity and largest absolute value are decidable properties for roots of $p(A)$, and c) it is impossible that there is no positive root having the largest absolute value, since that would contradict classical mathematics.]
The $s$-corresponding eigenvector can now be found using compactness and uniform continuity (of the linear function given by $A$ on $\mathbb{R}^n$). There most likely is a more elegant algebraic way, but I'm more of a lazy topologist, so I use brute topological force...
[second update:]
On a more elegant note, notice that the field of the algebraic reals $^r\mathbb{A}:= \mathbb{A}\cap\mathbb{R}$ gives rise to the discrete $n$-dimensional vector space $(^r\mathbb{A})^n$ (over $^r\mathbb{A}$). That means that we can actually calculate the Perron-Frobenius eigenvector $v$ by simple linear algebra, since the rank of $A-sI$ is $n-1$ and $A-sI$ is a matrix over a discrete field (the eigenvector spans the one-dimensional kernel of $A-sI$). So we easily find $v\in (^r\mathbb{A})^n$, and then, because of the classical theory and because $^r\mathbb{A}$ is discrete, by KG (see below) we immediately (and constructively!) deduce the positivity of $v$.
[In essence KG boils down to this: the eigenvector v is positive, because a) we can find v constructively, b) positivity is decidable for vectors in $(^r\mathbb{A})^n$, and c) it is impossible that v is not positive, since that would contradict classical mathematics.]
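To make the "simple linear algebra over a discrete field" concrete, here is a small sketch using SymPy's exact algebraic arithmetic (the matrix is my own illustrative example; SymPy plays the role of the discrete field $^r\mathbb{A}$ here, since its eigenvalue comparisons are exact):

```python
from sympy import Matrix

# A positive matrix with rational entries (illustrative example).
A = Matrix([[2, 1], [1, 2]])

# SymPy factors the characteristic polynomial exactly, so the
# eigenvalues come out as exact algebraic numbers, with decidable
# equality and decidable comparisons.
eigs = A.eigenvects()           # list of (eigenvalue, multiplicity, basis)

# The Perron-Frobenius eigenvalue has strictly the largest absolute
# value; this maximum is computable because comparisons are exact.
s, mult, basis = max(eigs, key=lambda t: abs(t[0]))
v = basis[0]

print(s)      # 3, the Perron-Frobenius eigenvalue
print(v.T)    # Matrix([[1, 1]]): a positive eigenvector
```

The eigenvector is obtained by solving the linear system $(A-sI)v=0$ exactly, just as in the argument above; positivity of its entries can then be read off (decidably).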
This then proves:
Theorem (Constructive Perron-Frobenius for positive algebraic matrices):
Let $A$ be a positive $n\times n$ matrix with entries in the real algebraic field $^r\mathbb{A}$. Then:
a) there is a positive real algebraic number $s$, called the Perron–Frobenius eigenvalue, such that $s$ is an eigenvalue of $A$ and any other eigenvalue $\lambda$ (possibly complex) is strictly smaller than $s$ in absolute value. Moreover, $s$ is a simple root of the characteristic polynomial of $A$. Consequently, the eigenspace associated to $s$ is one-dimensional.
b) there is an $s$-eigenvector $v$ of $A$ such that all components of $v$ are positive algebraic.
[third update:]
Yet even for positive $A$, I don't see how theorem 1 above would follow constructively in an immediate way. The difficulty lies in the case where the Perron-Frobenius eigenvalue is $1$. When we then try to approximate the eigenvector by rational vectors, we cannot (I think) assume that we can do so in a consistently order-positive or order-negative way (regarding $A(u)$ vs. $u$, for the approximating rational vector $u$).
What we do have is an interesting almost-generalization to the algebraic reals:
Theorem 1$^*$
Let $A$ be a positive $n\times n$ matrix with entries in the real algebraic field $^r\mathbb{A}$. Then at least one of the following two statements is true:
a) there is a vector $w$ in $\mathbb{Q}^n$ such that either $A(w)<w$ or $w<A(w)$
b) for every $\epsilon>0$ there is a vector $z$ in $\mathbb{Q}^n$ such that $|A(z)-z|<\epsilon$.
Proof. Case a) corresponds to the case where the Perron-Frobenius eigenvalue $s$ is apart from $1$. Case b) corresponds to $s=1$. We can decide in which case we find ourselves since $s$ is in $^r\mathbb{A}$.
In both cases we can approximate the Perron-Frobenius eigenvector by rational vectors, which then gives the theorem. (end of proof).
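Case b) can be illustrated in exact rational arithmetic (the matrix and the iteration count below are my own illustrative choices): power iteration with `Fraction`s produces rational vectors $z$ with $|A(z)-z|$ as small as desired when the Perron-Frobenius eigenvalue is $1$.

```python
from fractions import Fraction as F

def matvec(A, z):
    return [sum(a * x for a, x in zip(row, z)) for row in A]

# A doubly stochastic positive matrix: its Perron-Frobenius
# eigenvalue is exactly 1, with eigenvector (1/2, 1/2).
A = [[F(1, 3), F(2, 3)],
     [F(2, 3), F(1, 3)]]

# Exact rational power iteration: every iterate z, Az, A^2 z, ...
# is a rational vector, and |Az - z| shrinks by a factor 1/3 per
# step (the other eigenvalue of A is -1/3).
z = [F(1), F(0)]
for _ in range(20):
    z = matvec(A, z)

err = max(abs(a - b) for a, b in zip(matvec(A, z), z))
print(err)   # a tiny positive rational: z is never exactly fixed here
```

Every approximant is an honest element of $\mathbb{Q}^n$, which is exactly what statement b) asks for.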
Of course, a very nice alternative is a direct corollary of the constructive algebraic version of Perron-Frobenius itself, as explained above:
Theorem 1$^{**}$ (corollary of the constructive Perron-Frobenius thm. for positive algebraic matrices)
Let $A$ be a positive $n\times n$ matrix with entries in the real algebraic field $^r\mathbb{A}$. Then there is a vector $w$ in $(^r\mathbb{A})^n$ such that either $A(w)<w$ or $w<A(w)$ or $A(w)=w$.
I will leave other analogues to the OP.
[fourth update:]
KG (the Kleene Getaway) is a constructive principle which enables the direct constructive use of a classical result in the following situation:
Suppose T is a classical theorem of analysis/topology/... provable in a formal system $\mathrm{F}$ which is constructively 'sound' in the sense that $\mathrm{F}$ is constructively equiconsistent with its constructive subsystem $\mathrm{F}_{\mathrm{c}}$, obtained by leaving out LEM.
Now suppose that we have an $x$ and a property $P$ such that:
a) $P(x)$ is constructively decidable (so we can decide $P(x)$ or $\neg P(x)$)
b) $P(x)$ holds classically by T
Then KG states that $P(x)$ holds constructively as well.
Proof: The proof is actually quite simple, on the proof-level of the formal system: $\neg P(x)$ is impossible by b) and the equiconsistency of $\mathrm{F}$ and $\mathrm{F}_{\mathrm{c}}$, so by the decidability in a) we conclude $P(x)$.
[fifth update: Matt F. pointed out in this question on MO that KG is proved (but not named) in Michael Beeson, "Some Relations between Classical and Constructive Mathematics", Journal of Symbolic Logic 1978; see here on JSTOR.]
One of Kleene's highest achievements in my opinion is one of the least known: his seminal work *The Foundations of Intuitionistic Mathematics, Especially in Relation to Recursive Functions* (usually abbreviated FIM).
In FIM Kleene developed a formal system (also called FIM...), and 4 years later, through function-realizability methods, he showed the equiconsistency of the classical extension of a basic subsystem of FIM with the intuitionistic extension of that same basic system.
So, unnoticed by many: intuitionistic mathematics is as sound as classical mathematics, in the comparative realm of say separable analysis, topology, etc.
Yes, both Theorem 1 and Theorem 2 have constructive proofs.
In the following, I will work with rational numbers, but the same arguments work for any ordered field. (Note that in constructive logic, fields are automatically discrete -- i.e., any two elements of a field are either equal or not. Therefore, I don't think that $\mathbb{R}$ is a field for any reasonable constructive definition of $\mathbb{R}$. Perhaps $\mathbb{R}$ is some sort of "pro-field" in the same way as Laurent series over a field are; to be frank, I don't care enough about $\mathbb{R}$ to find out.)
I set $\mathbb{N}=\left\{ 0,1,2,\ldots\right\} $.
Nonnegative matrices
My main tool will be the following result from linear optimization:
Theorem 3. Let $n\in\mathbb{N}$, $n^{\prime}\in\mathbb{N}$ and $m\in\mathbb{N}$. Let $A$ be an $m\times n$-matrix. Let $A^{\prime}$ be an $m\times n^{\prime}$-matrix. Then, exactly one of the following two assertions holds:
Assertion L1: There exist two vectors $x\in\mathbb{Q}^{n}$ and $x^{\prime }\in\mathbb{Q}^{n^{\prime}}$ such that $x>0$, $x^{\prime}\geq0$ and $Ax+A^{\prime}x^{\prime}=0$.
Assertion L2: There exists a vector $y\in\mathbb{Q}^{m}$ such that $y^{T}A\geq0$, $y^{T}A^{\prime}\geq0$ and $y^{T}A\neq0$.
Theorem 3 is Theorem 2.5l in my Elementary derivations of some results of linear optimization (where I prove it for $\mathbb{R}$ instead of $\mathbb{Q}$, but the proof works over any ordered field). It seems to be due to Motzkin; it is actually a particular case of what is called "solvability of f)" on the EoM page for the Motzkin transposition theorem (note that the general case is easily seen to follow from this particular case). It is one of several results that are roughly equivalent to Farkas's lemma or the duality of linear programming. Its constructive proof (there are probably much better sources than my notes -- perhaps Motzkin's thesis?) is more or less based on Fourier-Motzkin elimination.
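The Fourier-Motzkin elimination underlying Theorem 3 is itself fully constructive over $\mathbb{Q}$. Here is a minimal sketch of one elimination step in exact rational arithmetic (the function name and data representation are my own; this illustrates the method, not the full proof of Theorem 3):

```python
from fractions import Fraction as F

def fm_eliminate(ineqs, j):
    """One step of Fourier-Motzkin elimination, in exact rational
    arithmetic.  Each inequality a . x <= b is given as (a, b); the
    result is an equivalent system not involving variable j."""
    pos, neg, rest = [], [], []
    for a, b in ineqs:
        (pos if a[j] > 0 else neg if a[j] < 0 else rest).append((a, b))
    out = list(rest)
    for ap, bp in pos:
        for an, bn in neg:
            lam, mu = -an[j], ap[j]   # positive multipliers cancelling x_j
            out.append(([lam * p + mu * q for p, q in zip(ap, an)],
                        lam * bp + mu * bn))
    return out

# x + y <= 2, -x <= 0, -y <= 0; eliminating x leaves -y <= 0 and y <= 2.
system = [([F(1), F(1)], F(2)), ([F(-1), F(0)], F(0)), ([F(0), F(-1)], F(0))]
print(fm_eliminate(system, 0))
```

Iterating this step over all variables decides solvability of a rational inequality system outright, which is the computational heart of the alternative in Theorem 3.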
Recall that if $a=\left( a_{1},a_{2},\ldots,a_{n}\right) ^{T}\in \mathbb{Q}^{n}$ and $b=\left( b_{1},b_{2},\ldots,b_{n}\right) ^{T} \in\mathbb{Q}^{n}$ are two vectors of the same size, then $a<b$ means "$a_{i}<b_{i}$ for each $i\in\left\{ 1,2,\ldots,n\right\} $", whereas $a\leq b$ means "$a_{i}\leq b_{i}$ for each $i\in\left\{ 1,2,\ldots,n\right\} $". Thus, saying "$a<b$" is not the same as saying "$a\leq b$ and $a\neq b$". The pedant in me wants to add that it isn't even stronger, because the vector $0_{0}=\left( {}\right) ^{T}\in\mathbb{Q}^{0}$ (with no entries at all) satisfies $0_{0}<0_{0}$ but not $0_{0}\neq0_{0}$.
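These conventions are easy to encode and to sanity-check, including the pedantic point about the empty vector (helper names below are my own):

```python
from fractions import Fraction

def vlt(a, b):
    """a < b : strictly smaller in every coordinate."""
    return all(x < y for x, y in zip(a, b))

def vle(a, b):
    """a <= b : weakly smaller in every coordinate."""
    return all(x <= y for x, y in zip(a, b))

a = (Fraction(1), Fraction(2))
b = (Fraction(2), Fraction(3))
print(vlt(a, b))     # True: strict in both coordinates

# The pedantic point: the empty vector 0_0 satisfies 0_0 < 0_0
# (vacuously), yet certainly not 0_0 != 0_0.
empty = ()
print(vlt(empty, empty), empty != empty)  # True False
```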
Note also that if two vectors $a$ and $b$ satisfy $a\geq b$, then $a^{T}\geq b^{T}$. The same holds for $\leq$, $>$ and $<$.
With this out of the way, let's state a few simple lemmas:
Lemma 4. Let $n\in\mathbb{N}$ and $m\in\mathbb{N}$. Let $A\in \mathbb{Q}^{n\times m}$ be a matrix whose entries are nonnegative. Let $v\in\mathbb{Q}^{m}$ be such that $v\geq0$. Then, $Av\geq0$.
Proof of Lemma 4. Clear from the definition of $Av$.
Lemma 5. Let $n$ be a positive integer. Let $A\in\mathbb{Q}^{n\times n}$ be a matrix whose entries are nonnegative. Let $y\in\mathbb{Q}^{n}$ and $z\in\mathbb{Q}^{n}$ be such that $y\geq0$, $z\geq0$, $Ay\geq y$ and $Az<z$. Then, $y=0$.
Proof of Lemma 5. Assume the contrary. Thus, $y\neq0$.
From $Az<z$, we obtain $z>Az\geq0$ (by Lemma 4). Write $y$ as $y=\left( y_{1},y_{2},\ldots,y_{n}\right) ^{T}$, and write $z$ as $z=\left( z_{1},z_{2},\ldots,z_{n}\right) ^{T}$. The coordinates $z_{1},z_{2} ,\ldots,z_{n}$ of $z$ are positive (since $z>0$), while the coordinates $y_{1},y_{2},\ldots,y_{n}$ of $y$ are nonnegative (since $y\geq0$), and at least one of them is positive (since $y\neq0$). Hence, $\max\left\{ y_{i}/z_{i}\ \mid\ i\in\left\{ 1,2,\ldots,n\right\} \right\} $ is a well-defined positive rational number. Denote this number by $\mu$. Thus, for each $j\in\left\{ 1,2,\ldots,n\right\} $, we have $y_{j}/z_{j}\leq\mu$, so that \begin{equation} y_{j}\leq z_{j}\mu. \tag{1} \label{pf.l5.0} \end{equation}
Now, write the $n\times n$-matrix $A$ in the form $A=\left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq n}$. Then, $a_{i,j}\geq0$ for all $i$ and $j$ (since all entries of $A$ are nonnegative). We have $Ay\geq y$; in other words, \begin{equation} \sum_{j=1}^{n}a_{i,j}y_{j}\geq y_{i}\ \ \ \ \ \ \ \ \ \ \text{for each } i\in\left\{ 1,2,\ldots,n\right\} . \tag{2} \label{pf.l5.1} \end{equation} Also, $Az<z$; in other words, \begin{equation} \sum_{j=1}^{n}a_{i,j}z_{j}<z_{i}\ \ \ \ \ \ \ \ \ \ \text{for each } i\in\left\{ 1,2,\ldots,n\right\} . \tag{3} \label{pf.l5.2} \end{equation} Hence, for each $i\in\left\{ 1,2,\ldots,n\right\} $, we have \begin{align*} y_{i} & \leq\sum_{j=1}^{n}a_{i,j}\underbrace{y_{j}}_{\substack{\leq z_{j} \mu\\\text{(by \eqref{pf.l5.0})}}}\ \ \ \ \ \ \ \ \ \ \left( \text{by \eqref{pf.l5.1}}\right) \\ & \leq\underbrace{\sum_{j=1}^{n}a_{i,j}z_{j}}_{\substack{<z_{i}\\\text{(by \eqref{pf.l5.2})}}}\mu<z_{i}\mu\ \ \ \ \ \ \ \ \ \ \left( \text{since } \mu\text{ is positive}\right) . \end{align*} Thus, $y_{i}/z_{i}<\mu$ for each $i\in\left\{ 1,2,\ldots,n\right\} $. Hence, $\max\left\{ y_{i}/z_{i}\ \mid\ i\in\left\{ 1,2,\ldots,n\right\} \right\} <\mu$. This contradicts $\mu=\max\left\{ y_{i}/z_{i}\ \mid\ i\in\left\{ 1,2,\ldots,n\right\} \right\} $. This completes the proof of Lemma 5.
Proof of Theorem 1. The entries of the matrix $A$ are nonnegative; thus, the same holds for the matrix $A^{T}$.
As usual, let $I_{n}$ denote the $n\times n$ identity matrix. Theorem 3 (applied to $n$, $n$, $I_{n}$ and $A-I_{n}$ instead of $m$, $n^{\prime}$, $A$ and $A^{\prime}$) yields that we are in one of the following two cases:
Case 1: There exist two vectors $x\in\mathbb{Q}^{n}$ and $x^{\prime} \in\mathbb{Q}^{n}$ such that $x>0$, $x^{\prime}\geq0$ and $I_{n}x+\left( A-I_{n}\right) x^{\prime}=0$.
Case 2: There exists a vector $y\in\mathbb{Q}^{n}$ such that $y^{T} I_{n}\geq0$, $y^{T}\left( A-I_{n}\right) \geq0$ and $y^{T}I_{n}\neq0$.
Let us first consider Case 1. In this case, there exist two vectors $x\in\mathbb{Q}^{n}$ and $x^{\prime}\in\mathbb{Q}^{n}$ such that $x>0$, $x^{\prime}\geq0$ and $I_{n}x+\left( A-I_{n}\right) x^{\prime}=0$. Consider these $x$ and $x^{\prime}$. The vector $x^{\prime}$ has nonnegative coordinates (since $x^{\prime}\geq0$). From $I_{n}x+\left( A-I_{n}\right) x^{\prime}=0$, we obtain $0=I_{n}x+\left( A-I_{n}\right) x^{\prime }=x+Ax^{\prime}-x^{\prime}$, so that $Ax^{\prime}-x^{\prime}=-x<0$ (since $x>0$). In other words, $Ax^{\prime}<x^{\prime}$. Hence, $x^{\prime}\neq0$. Thus, there exists a nonzero vector $v\in\mathbb{Q}^{n}$ with nonnegative coordinates such that $Av<v$ (namely, $v=x^{\prime}$). Hence, Theorem 1 is proven in Case 1.
Let us now consider Case 2. In this case, there exists a vector $y\in \mathbb{Q}^{n}$ such that $y^{T}I_{n}\geq0$, $y^{T}\left( A-I_{n}\right) \geq0$ and $y^{T}I_{n}\neq0$. Denote this vector $y$ by $u$. Thus, $u\in\mathbb{Q}^{n}$ satisfies $u^{T}I_{n}\geq0$, $u^{T}\left( A-I_{n} \right) \geq0$ and $u^{T}I_{n}\neq0$. We have $u^{T}=u^{T}I_{n}\geq0$ and thus $u\geq0$. Also, $u^{T}=u^{T}I_{n}\neq0$ and thus $u\neq0$. Moreover, $u^{T}A-u^{T}=u^{T}\left( A-I_{n}\right) \geq0$ and thus $u^{T}A\geq u^{T}$. Taking transposes, we find $\left( u^{T}A\right) ^{T}\geq\left( u^{T}\right) ^{T}=u$, so that $u\leq\left( u^{T}A\right) ^{T}=A^{T}u$. In other words, $A^{T}u\geq u$.
Next, Theorem 3 (applied to $n$, $n$, $I_{n}$ and $A^{T}-I_{n}$ instead of $m$, $n^{\prime}$, $A$ and $A^{\prime}$) yields that we are in one of the following two subcases:
Subcase 2.1: There exist two vectors $x\in\mathbb{Q}^{n}$ and $x^{\prime }\in\mathbb{Q}^{n}$ such that $x>0$, $x^{\prime}\geq0$ and $I_{n}x+\left( A^{T}-I_{n}\right) x^{\prime}=0$.
Subcase 2.2: There exists a vector $y\in\mathbb{Q}^{n}$ such that $y^{T}I_{n}\geq0$, $y^{T}\left( A^{T}-I_{n}\right) \geq0$ and $y^{T} I_{n}\neq0$.
Let us first consider Subcase 2.1. In this subcase, there exist two vectors $x\in\mathbb{Q}^{n}$ and $x^{\prime}\in\mathbb{Q}^{n}$ such that $x>0$, $x^{\prime}\geq0$ and $I_{n}x+\left( A^{T}-I_{n}\right) x^{\prime}=0$. Consider these $x$ and $x^{\prime}$. From $I_{n}x+\left( A^{T}-I_{n}\right) x^{\prime}=0$, we obtain $0=I_{n}x+\left( A^{T}-I_{n}\right) x^{\prime }=x+A^{T}x^{\prime}-x^{\prime}$, so that $A^{T}x^{\prime}-x^{\prime}=-x<0$ (since $x>0$). In other words, $A^{T}x^{\prime}<x^{\prime}$. Hence, Lemma 5 (applied to $A^{T}$, $u$ and $x^{\prime}$ instead of $A$, $y$ and $z$) yields $u=0$. This contradicts $u\neq0$. This contradiction shows that Subcase 2.1 cannot happen.
Hence, we are in Subcase 2.2. Thus, there exists a vector $y\in\mathbb{Q}^{n}$ such that $y^{T}I_{n}\geq0$, $y^{T}\left( A^{T}-I_{n}\right) \geq0$ and $y^{T}I_{n}\neq0$. Consider this $y$. We have $y^{T}=y^{T}I_{n}\geq0$, so that $y\geq0$. We have $y^{T}=y^{T}I_{n}\neq0$, so that $y\neq0$. Thus, $y$ is a nonzero vector in $\mathbb{Q}^{n}$ with nonnegative coordinates (since $y\geq0$). We have $y^{T}A^{T}-y^{T}=y^{T}\left( A^{T}-I_{n}\right) \geq0$, so that $y^{T}A^{T}\geq y^{T}$. Taking transposes, we obtain $\left( y^{T}A^{T}\right) ^{T}\geq\left( y^{T}\right) ^{T}=y$. Since $\left( y^{T}A^{T}\right) ^{T}=Ay$, this rewrites as $Ay\geq y$. Hence, there exists a nonzero vector $v\in\mathbb{Q}^{n}$ with nonnegative coordinates such that $Av\geq v$ (namely, $v=y$). Hence, Theorem 1 is proven in Case 2.
We have now proven Theorem 1 in both Cases 1 and 2; thus we are done.
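To make the dichotomy of Theorem 1 concrete, here is a small exact-arithmetic check (a sketch only: the helper `classify` is mine, and it merely verifies given certificate vectors for two sample matrices; it is not the Fourier-Motzkin decision procedure from the proof):

```python
from fractions import Fraction as F

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def classify(A, v):
    """Which branch of Theorem 1 does the certificate v witness?"""
    Av = matvec(A, v)
    if all(a < b for a, b in zip(Av, v)):
        return "Av < v"
    if all(a >= b for a, b in zip(Av, v)):
        return "Av >= v"
    return "neither"

# Case 1 flavour: a strictly contracting nonnegative matrix.
A1 = [[F(1, 2), F(0)], [F(0), F(1, 3)]]
print(classify(A1, [F(1), F(1)]))   # Av < v

# Case 2 flavour: a permutation matrix fixing (1, 1).
A2 = [[F(0), F(1)], [F(1), F(0)]]
print(classify(A2, [F(1), F(1)]))   # Av >= v  (indeed Av = v)
```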
Irreducible nonnegative matrices
Next, we take aim at Theorem 2.
For any $n\times m$-matrix $A\in\mathbb{Q}^{n\times m}$ and any $i\in\left\{ 1,2,\ldots,n\right\} $ and $j\in\left\{ 1,2,\ldots,m\right\} $, we let $A_{i,j}$ denote the $\left( i,j\right) $-th entry of $A$.
We will need the following equivalent conditions for irreducibility:
Theorem 6. Let $n$ be a positive integer. Let $A\in\mathbb{Q}^{n\times n}$ be an $n\times n$-matrix whose entries are nonnegative. Then, the following statements are equivalent:
Statement 1: The matrix $A$ is irreducible.
Statement 2: For any $i\in\left\{ 1,2,\ldots,n\right\} $ and $j\in\left\{ 1,2,\ldots,n\right\} $, there exists some $k\in\mathbb{N}$ such that $\left( A^{k}\right) _{i,j}>0$.
Statement 3: There exists some $m\in\mathbb{N}$ such that all entries of the matrix $A^{0}+A^{1}+\cdots+A^{m}$ are positive.
Statement 4: If $\left\{ U,V\right\} $ is a partition of $\left\{ 1,2,\ldots,n\right\} $ into two nonempty subsets $U$ and $V$, then there exist $u\in U$ and $v\in V$ satisfying $A_{u,v}>0$.
Theorem 6 is well-known (it is the algebraic analogue of the equivalence of different definitions of "strong connectivity" for a directed graph), and the proof you will find in the literature (e.g., on Markov chains, I believe) is already constructive. All I will need is the implication from Statement 1 to Statement 3.
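Statement 3 is directly checkable, and one can always take $m=n-1$: any two vertices of a strongly connected digraph on $n$ vertices are joined by a walk of length at most $n-1$, so irreducibility already shows in the partial sum $A^{0}+A^{1}+\cdots+A^{n-1}$. A sketch in exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction

def is_irreducible(A):
    """Check Statement 3 with m = n-1: a nonnegative square matrix A
    is irreducible iff all entries of A^0 + A^1 + ... + A^(n-1) are
    positive.  Exact arithmetic throughout."""
    n = len(A)
    # S accumulates the partial sums; P holds the current power A^k.
    S = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    P = [row[:] for row in S]  # A^0 = identity
    for _ in range(n - 1):
        P = [[sum(P[i][k] * A[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        S = [[S[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return all(S[i][j] > 0 for i in range(n) for j in range(n))

print(is_irreducible([[0, 1], [1, 0]]))  # True: a 2-cycle
print(is_irreducible([[1, 0], [0, 1]]))  # False: two isolated loops
```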
We will also need the following simple lemma:
Lemma 7. Let $n\in\mathbb{N}$ and $m\in\mathbb{N}$. Let $A\in \mathbb{Q}^{n\times m}$ be a matrix whose entries are positive. Let $v\in\mathbb{Q}^{m}$ be such that $v\geq0$ and $v\neq0$. Then, $Av>0$.
Proof of Lemma 7. This is similar to Lemma 4, except that each coordinate of $Av$ is a sum of nonnegative addends containing at least one positive addend, and thus is positive.
Proof of Theorem 2. The matrix $A$ is irreducible. Thus, by Theorem 6 (more precisely, by the implication $\left( \text{Statement 1}\right) \Longrightarrow\left( \text{Statement 3}\right) $), there exists some $m\in\mathbb{N}$ such that all entries of the matrix $A^{0}+A^{1}+\cdots +A^{m}$ are positive. Consider this $m$, and set $B=A^{0}+A^{1}+\cdots +A^{m}\in\mathbb{Q}^{n\times n}$. Then, all entries of the matrix $B$ are positive.
Theorem 1 shows that there exists a nonzero vector $v\in\mathbb{Q}^{n}$ with nonnegative coordinates such that either $Av\geq v$ or $Av<v$. Consider such a $v$, and denote it by $w$. Thus, $w\in\mathbb{Q}^{n}$ is a nonzero vector with nonnegative coordinates and satisfies either $Aw\geq w$ or $Aw<w$.
Define a vector $v\in\mathbb{Q}^{n}$ by $v=Bw$. We shall show that $v$ is a nonzero vector with positive coordinates such that either $Av>v$ or $Av=v$ or $Av<v$. This will clearly prove Theorem 2.
We have $w\geq0$ (since the vector $w$ has nonnegative coordinates) and $w\neq0$ (since $w$ is nonzero). Hence, Lemma 7 (applied to $n$, $B$ and $w$ instead of $m$, $A$ and $v$) yields that $Bw>0$. Thus, $v=Bw>0$. In other words, $v$ is a vector with positive coordinates. Hence, $v$ is nonzero (since $n>0$).
We have $B=A^{0}+A^{1}+\cdots+A^{m}$. Thus, $AB=A^{1}+A^{2}+\cdots+A^{m+1} =BA$. Now, recall that either $Aw\geq w$ or $Aw<w$. Hence, we are in one of the following three cases:
Case 1: We have $Aw\geq w$ and $Aw\neq w$.
Case 2: We have $Aw=w$.
Case 3: We have $Aw<w$.
Let us first consider Case 1. In this case, we have $Aw\geq w$ and $Aw\neq w$. In other words, $Aw-w\geq0$ and $Aw-w\neq0$. Hence, Lemma 7 (applied to $n$, $B$ and $Aw-w$ instead of $m$, $A$ and $v$) yields $B\left( Aw-w\right) >0$. From $v=Bw$, we obtain \begin{equation} Av-v=\underbrace{AB}_{=BA}w-Bw=BAw-Bw=B\left( Aw-w\right) >0. \end{equation} In other words, $Av>v$. Thus, in Case 1, we have proven our goal (namely, that either $Av>v$ or $Av=v$ or $Av<v$).
Let us next consider Case 2. In this case, we have $Aw=w$. Now, from $v=Bw$, we obtain $Av=\underbrace{AB}_{=BA}w=B\underbrace{Aw}_{=w}=Bw=v$. Thus, in Case 2, we have proven our goal (namely, that either $Av>v$ or $Av=v$ or $Av<v$).
Let us finally consider Case 3. In this case, we have $Aw<w$. Hence, $w-Aw>0$. Therefore, $w-Aw\geq0$ and $w-Aw\neq0$. Hence, Lemma 7 (applied to $n$, $B$ and $w-Aw$ instead of $m$, $A$ and $v$) yields $B\left( w-Aw\right) >0$. From $v=Bw$, we obtain \begin{equation} v-Av=Bw-\underbrace{AB}_{=BA}w=Bw-BAw=B\left( w-Aw\right) >0. \end{equation} In other words, $Av<v$. Thus, in Case 3, we have proven our goal (namely, that either $Av>v$ or $Av=v$ or $Av<v$).
We have now proven our goal in each of the three Cases 1, 2 and 3. Thus, we always have either $Av>v$ or $Av=v$ or $Av<v$. As we have said, this proves Theorem 2.
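The construction in this proof is completely effective and easy to run. A sketch in exact rational arithmetic (helper names are mine; `w` and `m` are assumed given as in the proof, i.e. `w` comes from Theorem 1 and $A^{0}+\cdots+A^{m}$ is positive):

```python
from fractions import Fraction as F

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))] for i in range(n)]

def theorem2_vector(A, w, m):
    """Return v = B w with B = A^0 + A^1 + ... + A^m, the vector
    constructed in the proof of Theorem 2."""
    n = len(A)
    P = [[F(int(i == j)) for j in range(n)] for i in range(n)]  # A^0
    B = [row[:] for row in P]
    for _ in range(m):
        P = matmul(P, A)
        B = [[B[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return matvec(B, w)

# Example: A is irreducible (all entries 1), and w = (1, 0) satisfies
# Aw >= w with w nonzero and nonnegative, as Theorem 1 provides.
A = [[F(1), F(1)], [F(1), F(1)]]
w = [F(1), F(0)]
v = theorem2_vector(A, w, 1)   # here B = I + A works
print(v)                # positive, even though w had a zero coordinate
print(matvec(A, v))     # strictly bigger than v: the Av > v branch
```

Note how the multiplication by $B$ repairs the zero coordinate of $w$, exactly as Lemma 7 predicts.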