Singular values of a sequence of growing matrices
First let's change the matrix $H$ to $H=\frac{1}{2}\left(\begin{array}{cc}I_2 & I_2 \\ I_2& P_2\end{array}\right)$, where $P_2$ is a $2\times 2$ permutation matrix. This swapping of the rows of $H$ won't affect the singular values of the matrices. Now let's consider what effect the iteration has on a matrix of the form $\left(\begin{array}{cc}X &x \\ X &y\end{array}\right)$, where $X$ has $n$ columns. At the next iteration we will have $$ \frac{1}{2}\left(\begin{array}{cccc}X &x & X &x \\ X &y& X & y\\ X&x &X & y\\ X & y& X & x\end{array}\right)$$ To account for the linear dependence of the columns, we multiply on the right by
$$\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}I_n & 0 &0 \\ 0 & 1& 0 \\ \frac{1}{\sqrt{2}}I_n & 0 &0 \\ 0 & 0& 1\end{array}\right)$$ The resulting matrix is
$$ \frac{1}{2}\left(\begin{array}{ccc}\sqrt{2}X &x &x \\ \sqrt{2}X &y& y\\ \sqrt{2}X&x & y\\ \sqrt{2}X & y& x\end{array}\right)$$
We note that if the original matrix has orthonormal columns, then so does this matrix.
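As a quick numerical check of this reduction step, here is a short sketch (my own illustration, with NumPy and arbitrary placeholder blocks $X$, $x$, $y$) confirming the block identity above and that the reduction matrix has orthonormal columns:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 2                      # arbitrary block sizes for the check
X = rng.standard_normal((m, n))
x = rng.standard_normal((m, 1))
y = rng.standard_normal((m, 1))

# The next-iteration matrix (1/2) [[X,x,X,x],[X,y,X,y],[X,x,X,y],[X,y,X,x]]
M = 0.5 * np.block([[X, x, X, x],
                    [X, y, X, y],
                    [X, x, X, y],
                    [X, y, X, x]])

# The reduction matrix that removes the duplicated X-columns
I, z = np.eye(n), np.zeros((n, 1))
one, zero = np.ones((1, 1)), np.zeros((1, 1))
R = np.block([[I / np.sqrt(2), z,    z],
              [z.T,            one,  zero],
              [I / np.sqrt(2), z,    z],
              [z.T,            zero, one]])

# The claimed result of the reduction
s2 = np.sqrt(2)
expected = 0.5 * np.block([[s2 * X, x, x],
                           [s2 * X, y, y],
                           [s2 * X, x, y],
                           [s2 * X, y, x]])

assert np.allclose(M @ R, expected)          # block identity holds
assert np.allclose(R.T @ R, np.eye(n + 2))   # R has orthonormal columns
```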
This reduction accounts for why the rank of the matrix increases by one with each iteration. Also note that if $x=y$ then the rank will always be one. If we do this reduction at each step, we find that for the starting vector $K_1=[1;0]$ the matrix $G=K_L^TK_L$ appears in the limit to be a Hankel matrix plus a diagonal matrix, with diagonal entries $G_{ii}=(\sqrt{2})^{-2i}$ and off-diagonal entries $G_{ij}=(\sqrt{2})^{-i-j-2}$. This gives a very nice matrix, but only for this particular starting vector. If you use a starting vector of $K_1=[1;1]/\sqrt{2}$ then $K_L$ will always have rank one, with one as its largest singular value.
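The claimed limit is easy to tabulate. Below is a small sketch (assuming, as a guess, that the indices start at $i=1$) that builds an $N \times N$ truncation of this $G$ and computes its largest eigenvalue; since $G = K_L^T K_L$, its square root approximates $\sigma_1(K_L)$ in the limit.

```python
import numpy as np

def limiting_gram(N):
    """N x N truncation of the claimed limit of G = K_L^T K_L for K_1 = [1; 0]:
    off-diagonal entries (sqrt 2)^(-i-j-2), diagonal entries (sqrt 2)^(-2i)."""
    i = np.arange(1, N + 1)                             # index convention assumed
    G = 2.0 ** (-(i[:, None] + i[None, :] + 2) / 2.0)   # Hankel part
    np.fill_diagonal(G, 2.0 ** (-i))                    # diagonal part
    return G

for N in (5, 10, 20, 40):
    lam = np.linalg.eigvalsh(limiting_gram(N))[-1]      # largest eigenvalue
    print(N, np.sqrt(lam))                              # appears to settle quickly
```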
Based on the observation above, consider the matrix $$K_1=\frac{1}{\sqrt{2}}\left(\begin{array}{rr} 1 & 1\\ 1 & -1 \end{array}\right)\left(\begin{array}{c}\alpha \\ \beta\end{array}\right)$$ such that $K_1$ has a 2-norm of one. Applying the iteration to this matrix, we have
$$K_2=\frac{1}{2\sqrt{2}}\left(\begin{array}{rrrr} 1 & 1 & 1 &1 \\ 1 & -1 &1 &-1\\ 1& 1 & 1&-1\\1& -1 & 1 &1 \end{array}\right)\left(\begin{array}{cc}\alpha &0\\ \beta & 0 \\ 0 & \alpha \\ 0& \beta\end{array}\right)$$
Accounting for the nullspace as outlined above requires multiplying the coefficient matrix by the transpose of the matrix used to remove the nullspace:
$$\left(\begin{array}{cccr}
\frac{1}{\sqrt{2}}I_1 & 0 &\frac{1}{\sqrt{2}}I_1&0\\
0& 1& 0&0 \\
0 & 0& 0 & 1\\
\end{array}\right) \left(\begin{array}{cc}\alpha &0\\ \beta & 0 \\ 0 & \alpha \\ 0& \beta\end{array}\right) $$
We then have for $K_2$,
$$
\frac{1}{2\sqrt{2}}\left(\begin{array}{rrr} \sqrt{2} & 1 &1 \\ \sqrt{2} & -1 &-1\\ \sqrt{2}& 1 & -1\\ \sqrt{2}& -1 & 1 \end{array}\right)\left(\begin{array}{cc}\frac{1}{\sqrt{2}} \alpha & \frac{1}{\sqrt{2}}\alpha\\ \beta & 0 \\ 0 & \beta\end{array}\right)
$$
Noticing that the matrix on the left has orthogonal columns, we make a modification so that it has orthonormal columns, which means that we only need to understand what the iteration is doing to the matrix on the right.
$$
\frac{1}{2}\left(\begin{array}{rrr} 1 & 1 &1 \\ 1 & -1 &-1\\ 1& 1 & -1\\ 1& -1 & 1 \end{array}\right)\left(\begin{array}{cc}\frac{1}{\sqrt{2}} \alpha & \frac{1}{\sqrt{2}}\alpha\\ \frac{1}{\sqrt{2}}\beta & 0 \\ 0 & \frac{1}{\sqrt{2}}\beta\end{array}\right)
$$
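As a check (a sketch with an arbitrary unit-norm choice of $\alpha, \beta$), the left factor indeed has orthonormal columns and the factorization reproduces $K_2$:

```python
import numpy as np

alpha, beta = 0.6, 0.8        # arbitrary choice with alpha^2 + beta^2 = 1
s2 = np.sqrt(2)

# K_2 as displayed earlier
H4 = np.array([[1,  1, 1,  1],
               [1, -1, 1, -1],
               [1,  1, 1, -1],
               [1, -1, 1,  1]]) / (2 * s2)
K2 = H4 @ np.array([[alpha, 0], [beta, 0], [0, alpha], [0, beta]])

# Orthonormal-column factor times the alpha,beta matrix
W = np.array([[1,  1,  1],
              [1, -1, -1],
              [1,  1, -1],
              [1, -1,  1]]) / 2
C2 = np.array([[alpha / s2, alpha / s2],
               [beta / s2, 0],
               [0, beta / s2]])

assert np.allclose(W.T @ W, np.eye(3))   # columns of W are orthonormal
assert np.allclose(W @ C2, K2)           # factorization reproduces K_2
# So the nonzero singular values of K_2 are exactly those of C2.
print(np.linalg.svd(C2, compute_uv=False))
```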
The same steps can be taken at the next iteration. The resulting $\alpha,\beta$ matrix will be
$$K_3=\left(\begin{array}{cccc}\frac{1}{2}\alpha &\frac{1}{2}\alpha &\frac{1}{2}\alpha &\frac{1}{2}\alpha \\ \frac{1}{2}\beta&0 &\frac{1}{2}\beta&0\\0&\frac{1}{2}\beta & 0 &0\\ 0 & 0&0&\frac{1}{2}\beta\end{array}\right)$$
In general the $\alpha,\beta$ matrix will have the form $$\left(\begin{array}{c}a^T\\B\\b^T\end{array}\right)$$ with the $\alpha,\beta$ matrix for the next iteration being $$\frac{1}{\sqrt{2}}\left(\begin{array}{cc}a^T &a^T\\B&B\\b^T& 0\\0 & b^T\end{array}\right)$$ The matrix $B$ and the vector $b$ depend only on $\beta$, and at the $L$th iteration their entries are either zero or $\beta 2^{\frac{1-L}{2}}$. At the $L$th iteration the vector $a$ has entries $\alpha 2^{\frac{1-L}{2}}$. The matrix $B$ will have $L-1$ rows; the $i$th row of $B$ will have $2^{L-i-1}$ nonzero entries, and the rows of $B$ will be orthogonal to one another as well as to the vector $b$. The matrix will be $2^{L-1}$ columns wide. The singular values of the $\alpha,\beta$ matrix determine the singular values of $K_L$. Due to the structure described, multiplying the $\alpha,\beta$ matrix by its transpose yields an arrowhead matrix
$$\left(\begin{array}{ccccc}\alpha^2 & \alpha\beta 2^{-1} & \alpha\beta 2^{-2}& \alpha\beta 2^{-3}& \ldots \\ \alpha\beta 2^{-1} & \beta^2 2^{-1} & & & \\ \alpha\beta 2^{-2} & & \beta^2 2^{-2} & &\\ \alpha\beta 2^{-3} & & & \beta^2 2^{-3} &\\ \vdots & & & &\ddots \\ \end{array}\right)$$ whose eigenvalues are the roots of $$f(\lambda)=\alpha^2-\lambda-\sum_{i=1}^\infty\frac{\frac{\alpha^2\beta^2}{2^{2i}}}{\frac{\beta^2}{2^i}-\lambda}$$
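The recursion is straightforward to implement; the sketch below (my own illustration) builds the $\alpha,\beta$ matrix level by level, checks the arrowhead structure, and computes the largest singular value. For $\alpha=\beta=1/\sqrt{2}$, which corresponds to the start $K_1=[1;0]$, it returns a value near $0.8254$, apparently the same limit reported in the numerical experiment quoted further down.

```python
import numpy as np

def alpha_beta_matrix(alpha, beta, L):
    """Level-L alpha,beta matrix (L >= 2), built via the recursion
    (a^T; B; b^T) -> (1/sqrt 2) (a^T a^T; B B; b^T 0; 0 b^T)."""
    s2 = np.sqrt(2)
    C = np.array([[alpha / s2, alpha / s2],   # level-2 matrix from above
                  [beta / s2, 0.0],
                  [0.0, beta / s2]])
    for _ in range(L - 2):
        a, B, b = C[0], C[1:-1], C[-1]
        z = np.zeros_like(b)
        C = np.vstack([np.hstack([a, a]),
                       np.hstack([B, B]),
                       np.hstack([b, z]),
                       np.hstack([z, b])]) / s2
    return C

alpha = beta = 1 / np.sqrt(2)    # corresponds to the start K_1 = [1; 0]
C = alpha_beta_matrix(alpha, beta, 10)

M = C @ C.T
shaft = M[1:, 1:]
assert np.allclose(shaft - np.diag(np.diag(shaft)), 0)  # arrowhead: shaft is diagonal
assert np.isclose(M[0, 0], alpha**2)                    # head entry is alpha^2

print(np.sqrt(np.linalg.eigvalsh(M)[-1]))               # ~0.8254 for this start
```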
While not a complete answer, here are some ideas that may be of interest.
Define $\theta = (1+\alpha)^2$.
To ease notation, I'll write $K_L$ to mean $K_L(\alpha)$. Some experimentation reveals a nice pattern satisfied by $M_L := K_L^TK_L$. In particular, we see that $M_L$ is composed of two numbers, $\frac{\theta}{2^L}$ and $\frac{2\theta}{2^L}$, which occur in a well-structured pattern. We seek to bound the Perron-Frobenius root, say $\rho$, of the nonnegative matrix $M_L$. Looking at the pattern of $M_L$, we see that the smallest possible column sum of $M_L$ occurs when $\frac{\theta}{2^L}$ appears $2^{L-1}-1$ times and $\frac{2\theta}{2^L}$ appears once. Thus, we see that \begin{equation*} \rho \ge (2^{L-1}-1)\frac{\theta}{2^L} + \frac{2\theta}{2^L} = \theta\left(\frac{1}{2}+\frac{1}{2^L}\right). \end{equation*}
Similarly, by looking at the pattern, we see that the largest row sum occurs when $\frac{\theta}{2^L}$ appears $2^{L-2}$ times and $\frac{2\theta}{2^L}$ appears $2^{L-1}-2^{L-2}$ times (noting that $M_L$ is of size $2^{L-1} \times 2^{L-1}$). Thus, we see that \begin{equation*} \rho \le 2^{L-2}\frac{\theta}{2^L} + (2^{L-1}-2^{L-2}) \frac{2\theta}{2^L} = \frac{3}{4}\theta. \end{equation*}
For example, when $\alpha=0$ we have $\theta=1$, and the upper bound gives $\sigma_1(K_L) = \sqrt{\rho} \le \sqrt{3/4}$; in general, we get \begin{equation*} \sigma_1(K_L) \le (1+\alpha)\sqrt{\frac{3}{4}}. \end{equation*}
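Numerically, the bounds are easy to evaluate (a sketch computing only the bound formulas, since the exact pattern of $M_L$ is not written out here); for $\alpha=0$ the observed limit $0.8254\ldots$ sits comfortably between them:

```python
import numpy as np

def sigma1_bounds(alpha, L):
    """Bounds on sigma_1(K_L) from the smallest column sum and largest
    row sum of M_L, as derived above."""
    theta = (1 + alpha) ** 2
    lower = theta * (0.5 + 2.0 ** (-L))   # smallest column sum
    upper = 0.75 * theta                  # largest row sum
    return np.sqrt(lower), np.sqrt(upper)

for L in (3, 5, 7, 10):
    lo, hi = sigma1_bounds(0.0, L)
    print(L, lo, hi)   # lower -> sqrt(1/2) ~ 0.707, upper = sqrt(3/4) ~ 0.866
```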
As I noted previously, a more careful analysis is needed to refine these bounds and to characterize how the limiting value is achieved (perhaps we can leverage $K_L$ being a column-stochastic matrix?), but this bound is numerically not too bad. For example, a quick numerical experiment shows that $\sigma_1(K_L(0)) \to 0.8254\ldots$ very rapidly (already for $K_7$, the first 4 digits match).