Principal ideal ring: does there exist an invertible matrix such that a given matrix becomes upper triangular?

The answer to your initial question is yes, and Igor Rivin's comment is right and good. The result follows from the fact that a commutative PIR (Principal Ideal Ring) with identity is a Hermite ring in the sense of I. Kaplansky.

Indeed, a Hermite ring possesses a trigonal reduction as required, cf. [1, Theorem 3.5]. The fact that a commutative PIR with identity is a Hermite ring is given by [1, Theorem 12.3 and the subsequent remark] and can also be deduced from Hungerford's theorem [3, Theorem 1], for instance (quotients and direct products of Hermite rings are still Hermite). A PIR is actually an elementary divisor ring in the sense of I. Kaplansky (again a consequence of [3, Theorem 1]), which is even stronger, as it enjoys diagonal reduction. This is stated without proof in [4, Notes on Chapter I].

Regarding rschwieb's answer, that approach already fails for a PID (Principal Ideal Domain) if you restrict yourself to elementary row operations. Indeed, you cannot row-reduce every $2$-by-$1$ matrix with coefficients in $R = \mathbf{Z}[\frac{1 + \sqrt{-19}}{2}]$ to a trigonal form $\begin{pmatrix}* \\ 0 \end{pmatrix}$: otherwise $R$ would be a $GE$-ring in the sense of P. M. Cohn, and it isn't [2, Introduction and Theorem 6.1]. Still, the approach succeeds for a Euclidean ring $R$, since we have $SL_n(R) = E_n(R)$ in this case. Here $E_n(R)$ denotes the group generated by the elementary matrices, i.e., those matrices which induce the determinant-preserving elementary row operations.
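To see why the Euclidean case works, note that the Euclidean algorithm itself is a sequence of elementary row moves: each division step is a transvection, and a row swap is three transvections up to a sign change. A minimal sketch over $\mathbf{Z}$ (the function name is mine, purely for illustration):

```python
def euclid_reduce(a, b):
    """Reduce the column (a, b)^T to (g, 0)^T with g = +/- gcd(a, b),
    using only moves available through E_2(Z): each iteration is the
    transvection row1 <- row1 - q*row2 followed by a row swap, and a
    swap is itself three transvections up to a sign change."""
    while b != 0:
        q, r = divmod(a, b)   # Euclidean division: a = q*b + r, |r| < |b|
        a, b = b, r           # transvection, then swap
    return a                  # +/- gcd of the original entries
```

Over the non-Euclidean PID $\mathbf{Z}[\frac{1+\sqrt{-19}}{2}]$ there is no analogue of the division step `divmod`, which is exactly where this loop breaks down.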

Thus row reduction, i.e., multiplication on the left by elementary matrices, does not suffice to reduce a matrix to a trigonal form over a PID. However, reduction to a trigonal form is always achievable over a PID if you allow multiplication on the left by an arbitrary invertible matrix, which is your original problem. This is part of the Smith Normal Form Theorem, the result rightly referred to by rschwieb and Igor Rivin. It is presented in classical textbooks, e.g., "Abstract Algebra" by D. S. Dummit and R. M. Foote, which also handles Dedekind domains. Other readers may point you to sources of a more algorithmic nature; the Euclidean case is advisable, as it is algorithmic by nature and far less technical than the PID case. I found the Wikipedia page on the Smith normal form pretty good algorithm-wise. It goes into the details of the not-so-trivial induction arguments based on the number of prime divisors of the coefficients, something that is missing from rschwieb's outline. (You'll see there that the matrix $L_0 \in SL_2(R)$ used in step II is not necessarily an elementary matrix.)
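For the original problem over a Euclidean ring such as $\mathbf{Z}$, the whole trigonal reduction can be carried out while keeping track of the invertible matrix on the left. A sketch in Python (the name and layout are mine, not taken from the cited sources); it uses only row swaps and transvections, so $\det U = \pm 1$:

```python
def triangularize(A):
    """Return (U, T) with U*A = T upper triangular and U invertible
    over Z (det U = +/- 1).  A sketch of the trigonal reduction over
    the Euclidean ring Z."""
    m, n = len(A), len(A[0])
    T = [row[:] for row in A]                                # working copy
    U = [[int(i == j) for j in range(m)] for i in range(m)]  # identity

    def addrow(i, j, c):          # transvection: row_i <- row_i + c*row_j
        T[i] = [x + c * y for x, y in zip(T[i], T[j])]
        U[i] = [x + c * y for x, y in zip(U[i], U[j])]

    def swap(i, j):               # exchange rows i and j
        T[i], T[j] = T[j], T[i]
        U[i], U[j] = U[j], U[i]

    r = 0                         # current pivot row
    for col in range(n):
        if r == m:
            break
        for i in range(r + 1, m):
            # Euclidean algorithm on the pair (T[r][col], T[i][col])
            while T[i][col] != 0:
                if T[r][col] != 0:
                    addrow(i, r, -(T[i][col] // T[r][col]))
                if T[i][col] != 0:
                    swap(r, i)
        if T[r][col] != 0:
            r += 1
    return U, T
```

Over a general PID the Euclidean inner loop must be replaced by $2\times 2$ blocks from $SL_2(R)$ realizing a Bézout identity, which is precisely the non-elementary matrix $L_0$ mentioned above.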

The most general class of rings with reduction to a trigonal form is, by definition, the class of Hermite rings in the sense of Kaplansky. For a modern treatment see [4], where the author refers to them as K-Hermite rings. As stated above, this class includes PIRs. In my humble opinion, the issue with zero divisors was not a minor one.


[1] "Elementary divisors and modules", I. Kaplansky, 1949.
[2] "On the structure of the $GL_2$ of a ring", P. M. Cohn, 1966.
[3] "On the structure of principal ideal rings", T. W. Hungerford, 1968.
[4] "Serre's problem on projective modules", T. Y. Lam, 2006.


An elementary matrix is a $p\times p$ matrix of the form $$ E^a_{i,j} = I + a e_{i,j}, $$ where $I$ is the identity matrix, $a$ is any element in your ring, $i\neq j$, and $e_{i,j}$ is the standard matrix unit. It is easy to see that $E^a_{i,j}$ is always invertible and that its inverse is $E^{-a}_{i,j}$. The other crucial property of this matrix is that, for every $p\times q$ matrix $A$, the product $E^a_{i,j}A$ coincides with the matrix you get by multiplying the $j^{th}$ row of $A$ by $a$, and adding it to the $i^{th}$ row. This is therefore one of the elementary operations used to perform row reduction.
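Both properties are easy to check concretely. A small sketch over the integers (helper names are mine):

```python
def elem(p, i, j, a):
    """E^a_{i,j} = I + a*e_{i,j}: the p x p identity matrix with the
    entry a placed in position (i, j), i != j."""
    E = [[int(r == c) for c in range(p)] for r in range(p)]
    E[i][j] = a
    return E

def matmul(X, Y):
    """Matrix product over the ring (here Z), as nested lists."""
    return [[sum(X[r][k] * Y[k][c] for k in range(len(Y)))
             for c in range(len(Y[0]))] for r in range(len(X))]

I2 = [[1, 0], [0, 1]]
# E^a E^{-a} = I: the inverse of E^3_{0,1} is E^{-3}_{0,1}
assert matmul(elem(2, 0, 1, 3), elem(2, 0, 1, -3)) == I2
# Left multiplication adds 3 times row 1 to row 0
A = [[1, 2], [4, 5]]
assert matmul(elem(2, 0, 1, 3), A) == [[13, 17], [4, 5]]
```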

The other important operation used in row reduction, namely the exchange of two rows, is also easily seen to be obtained by left multiplying $A$ by a suitable invertible matrix. Thus, if you can transform $A$ into an upper triangular matrix using a finite number of elementary operations, you may instead left multiply $A$ by the corresponding invertible matrices, obtaining an upper triangular matrix, say $$ U_n\cdots U_2U_1A=T, $$ where $U_k$ implements the $k^{th}$ operation, from where your conclusion follows easily.
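The exchange matrix can also be written down explicitly: it is a permutation matrix, which is its own inverse, hence invertible, even though its determinant is $-1$ rather than $1$. A small sketch (helper names are mine):

```python
def swap_matrix(p, i, j):
    """P: the p x p identity with rows i and j exchanged.  Left
    multiplication by P swaps rows i and j of any matrix, and P is
    its own inverse, so it is invertible (det P = -1)."""
    P = [[int(r == c) for c in range(p)] for r in range(p)]
    P[i], P[j] = P[j], P[i]
    return P

def matmul(X, Y):
    """Matrix product as nested lists."""
    return [[sum(X[r][k] * Y[k][c] for k in range(len(Y)))
             for c in range(len(Y[0]))] for r in range(len(X))]

P = swap_matrix(2, 0, 1)
assert matmul(P, P) == [[1, 0], [0, 1]]                  # its own inverse
assert matmul(P, [[1, 2], [3, 4]]) == [[3, 4], [1, 2]]   # rows exchanged
```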