Convergence of the powers of a Markov transition matrix
Based on the OP we restrict attention to Markov chains with finite state space $\Bbb X=\{1,2,\dots,N\}$. There are two conditions, (i) irreducibility and (ii) aperiodicity, which determine the convergence of the transition matrices $A^n$. But first let us introduce some minimal (by no means exhaustive) standard notation that is common in almost all textbooks and references on Markov chains (see also here):
- Transition probabilities: $p^{(n)}_{ij}$ denotes the probability of going from state $i$ to state $j$ in $n$ time steps. In particular, $p^{(1)}_{ij}$, or simply $p_{ij}$, is defined as $$p_{ij}=\Bbb P(X_1=j \mid X_0=i)$$
- Stationary distribution: A probability vector $π$ that satisfies $π=πP$ is called a stationary distribution. To avoid a sloppy exposition I skip the details and let you read about it yourself.
- Hitting times/probabilities: Let $T_1:=\inf\{n\ge 0: X_n=1\}$ be the first time that the chain lands in state $1$ (and hence remains there thereafter, as it is absorbing). Accordingly, define $$h^{1}_k:=\Bbb P(T_1<T_6\mid X_0=k)$$ to be the probability that the chain will be absorbed by state $1$ rather than state $6$, starting from state $k \in \mathbb X$.
- Irreducibility: A Markov chain is said to be irreducible if its state space is a single communicating class; in other words, if it is possible to get to any state from any state. This is not the case here, as your chain has $3$ communicating classes: $C_1=\{1\}, C_2=\{6\}$, which are closed (absorbing states), and $C_3=\{2,3,4,5\}$, which is open or transient.
- (A)periodicity: A state $i$ has period $k$ if any return to state $i$ must occur in multiples of $k$ time steps. If $k=1$ then the state is called aperiodic. If $p_{ii}>0$ then state $i$, and hence all states in its communicating class, are aperiodic. This is the case here for all states.
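For completeness, the communicating-class decomposition can be found mechanically as the strongly connected components of the transition graph. A minimal sketch with a toy $3$-state chain of my own (an illustration, not the OP's matrix), assuming SciPy is available:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Toy reducible chain (my own example, not the OP's):
# states 0 and 1 communicate, state 2 is absorbing.
P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]])

# Communicating classes = strongly connected components of the
# directed graph with an edge i -> j whenever p_ij > 0.
n_classes, labels = connected_components(csr_matrix(P > 0),
                                         directed=True,
                                         connection='strong')
print(n_classes, labels)   # 2 classes; states 0 and 1 share a label
```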
Case 1: irreducible and aperiodic. If a Markov chain $(X_n)_{n\in \Bbb N}$ with finite state space is (i) irreducible and (ii) aperiodic, then $$\lim_{n\to+\infty} p^{(n)}_{ij}=π_j$$ for all $i,j\in\Bbb X$ (where I use the usual notation for the transition probabilities and the stationary distribution). So, the question is what happens when either or both of these conditions are violated:
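As a numerical sanity check for Case 1, here is a sketch with a toy $2$-state chain of my own (the matrix is an assumption for illustration, not the OP's):

```python
import numpy as np

# Toy irreducible, aperiodic 2-state chain (not the OP's).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# pi solves pi = pi P, i.e. it is a left eigenvector of P
# for the eigenvalue 1, normalised to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()            # pi = (5/6, 1/6)

# Every row of P^n converges to pi, regardless of the starting state.
Pn = np.linalg.matrix_power(P, 100)
print(pi)                     # ~ [0.8333, 0.1667]
print(Pn)                     # both rows are ~ pi
```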
Case 2: irreducible and periodic. If the chain has period $d$, then for every pair $i,j \in \Bbb X$ there exists $0\le r\le d-1$ such that $$\lim_{m\to+\infty}p^{(md+r)}_{ij}=dπ_j$$ and $p^{(n)}_{ij}=0$ for all $n$ such that $n\not\equiv r \pmod d$.
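To see the subsequence phenomenon concretely, here is a toy chain of my own with period $d=2$ (again an illustration, not the OP's chain):

```python
import numpy as np

# Two states that deterministically swap: period d = 2,
# stationary distribution pi = (1/2, 1/2).
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Along even times p_ii^(2m) = 1 = d * pi_i, while p_ii^(n) = 0 for
# every odd n, so the full sequence p_ii^(n) has no limit.
P_even = np.linalg.matrix_power(P, 10)   # the identity matrix
P_odd  = np.linalg.matrix_power(P, 11)   # the swap matrix itself
print(P_even)
print(P_odd)
```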
Case 3: reducible and aperiodic. In this case, $π$ is not uniquely defined. For example, in the OP any $λπ^1+(1-λ)π^6$ with $λ\in[0,1]$ is a stationary distribution for the whole chain, where $π^1(k)=δ_{1k}$ and $π^6(k)=δ_{6k}$ for any $k\in \Bbb X$. Still, the following holds $$\lim_{n\to+\infty}p^{(n)}_{ij}=h_i^Cπ_j$$ where $h_i^C$ denotes the hitting probability of the closed class $C$ with $j\in C$, starting from state $i$, and $π$ is the stationary distribution supported on $C$. (This is the case in the OP, see calculations below.)
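The non-uniqueness of $π$ is easy to verify numerically; a minimal sketch with a toy chain of my own that has two absorbing states (not the OP's matrix):

```python
import numpy as np

# Toy reducible chain (not the OP's): states 0 and 2 are absorbing.
P = np.array([[1.0, 0.0, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.0, 1.0]])

d0 = np.array([1.0, 0.0, 0.0])   # point mass on absorbing state 0
d2 = np.array([0.0, 0.0, 1.0])   # point mass on absorbing state 2

# Every convex combination of the two point masses is stationary.
for lam in (0.0, 0.3, 0.7, 1.0):
    pi = lam * d0 + (1 - lam) * d2
    assert np.allclose(pi @ P, pi)
print("all mixtures are stationary")
```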
Case 4: reducible and periodic. In this case, you only need to adjust Cases $2$ and $3$ to fit the description of any particular chain.
To obtain the limiting matrix in the given example analytically (and not only numerically as you have it), denote the state space of the chain with $\Bbb X=\{1,2,3,4,5,6\}$ and let $h^{1}_k:=\Bbb P(T_1<T_6\mid X_0=k)$ be the probability that the chain will be absorbed by the absorbing state $1$ instead of $6$. Formally $T_1$ is defined as $T_1:=\inf\{n\ge 0:X_n=1\}$ and similarly $T_6$. Then, solving the following system \begin{align}h^{1}_1&=1\\[0.1cm]h^{1}_2&=\tfrac12h^{1}_1+\tfrac12h^{1}_3\\h^{1}_3&=\tfrac14h^{1}_1+\tfrac14h^{1}_3+\tfrac12h^{1}_4\\h^{1}_4&=\tfrac18h^{1}_1+\tfrac18h^{1}_3+\tfrac14h^{1}_4+\tfrac12h^{1}_5\\h^{1}_5&=\tfrac1{16}h^{1}_1+\tfrac1{16}h^{1}_3+\tfrac18h^{1}_4+\tfrac14h^{1}_5+\tfrac12h^{1}_6\\[0.1cm]h^{1}_6&=0\end{align} yields the first column of the limiting matrix as you have it, i.e. $$\left(h^{1}_1,h^{1}_2,h^{1}_3,h^{1}_4,h^{1}_5,h^{1}_6\right)=\frac1{10}(10,8,6,4,2,0)$$ Since the chain will eventually be absorbed by either state $1$ or state $6$, the last column of the limiting matrix follows by complementarity.
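The calculation is easy to reproduce in code. A sketch in NumPy, where the transition matrix $P$ is read off from the coefficients of the linear system above (so it is my reconstruction of the OP's chain, not a quoted matrix):

```python
import numpy as np

# Transition matrix read off from the hitting-probability system
# (rows/columns ordered as states 1..6; states 1 and 6 absorbing).
P = np.array([
    [1,    0, 0,    0,   0,   0  ],
    [1/2,  0, 1/2,  0,   0,   0  ],
    [1/4,  0, 1/4,  1/2, 0,   0  ],
    [1/8,  0, 1/8,  1/4, 1/2, 0  ],
    [1/16, 0, 1/16, 1/8, 1/4, 1/2],
    [0,    0, 0,    0,   0,   1  ],
])

# Solve (I - P) h = 0 on the transient states, with the
# boundary conditions h_1 = 1 and h_6 = 0.
A = np.eye(6) - P
A[0] = 0.0; A[0, 0] = 1.0    # row enforcing h_1 = 1
A[5] = 0.0; A[5, 5] = 1.0    # row enforcing h_6 = 0
b = np.zeros(6); b[0] = 1.0
h = np.linalg.solve(A, b)
print(h)   # ~ (1, 0.8, 0.6, 0.4, 0.2, 0)

# The same numbers appear as the first column of the limiting matrix.
print(np.linalg.matrix_power(P, 200)[:, 0])
```

Raising $P$ to a large power reproduces the limiting matrix numerically, which is exactly the agreement between the two computations described above.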