Why is a leading digit not counted as a significant figure if it is a 1?
Significant figures are a shorthand to express how precisely you know a number. For example, if a number has two significant figures, then you know its value to roughly $1\%$.
I say "roughly" because it depends on the number. For example, if you report $$L = 89 \, \text{cm}$$ then this implies that you know it's between $88.5$ and $89.5$ cm. That is, you know its value to one part in $89$, which is roughly $1\%$.
However, this gets less accurate the smaller the leading digit is. For example, for $$L = 34 \, \text{cm}$$ you only know it to one part in $34$, which is about $3\%$. And in the extreme case $$L = 11 \, \text{cm}$$ you only know it to one part in $11$, which is about $10\%$! So if the leading digit is a $1$, the relative uncertainty of your quantity is actually a lot higher than naively counting the significant figures would suggest. In fact, it's about the same as you would expect if you had one fewer significant figure. For that reason, $11$ has "one" significant figure.
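If it helps to see this arithmetic at a glance, here is a minimal Python sketch. The helper name `one_part_in` is my own, purely for illustration: it takes a reported value and returns the relative width of the implied rounding interval, i.e. one unit in the last reported place divided by the value.

```python
def one_part_in(reported: str) -> float:
    """Relative width of the rounding interval implied by a reported value:
    one unit in the last reported place, divided by the value itself."""
    value = float(reported)
    # Count digits after the decimal point (zero if there is no point).
    decimals = len(reported.split(".")[1]) if "." in reported else 0
    ulp = 10.0 ** (-decimals)  # one unit in the last reported place
    return ulp / value

for s in ["89", "34", "11"]:
    rel = one_part_in(s)
    print(f"L = {s} cm -> one part in {1 / rel:.0f} (~{100 * rel:.0f}%)")

# Prints (1/11 = 9.1% rounds down to 9%):
# L = 89 cm -> one part in 89 (~1%)
# L = 34 cm -> one part in 34 (~3%)
# L = 11 cm -> one part in 11 (~9%)
```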
Yes, this rule is arbitrary, and it doesn't fully solve the problem. (Now instead of having a sharp cutoff between $L = 9$ cm and $L = 10$ cm, you have a sharp cutoff between $L = 19$ cm and $L = 20$ cm.) But significant figures are a bookkeeping tool, not something that really "exists". They're defined just so that they're useful for quick estimates. In physics, at least, when we start quibbling over this level of detail, we just abandon significant figures entirely and do proper error analysis from the start.
This isn't an actual rule. And as some people point out in the comments, it's not even mentioned in the Wikipedia article on significant digits. The familiar leading-digit rule applies to $0$, not to $1$: leading zeros merely mark the position of the decimal point, so they are not significant.
A simple counterexample: $10$. Would the authors claim that this number has no significant digits?
You can verify this by doing a search for "sig fig counter." All of them should tell you that the number in your question has 4 significant figures.
As others note, this boundary condition is clearly arbitrary. But it needs to be consistent across the literature, or else confusion abounds when you're working with others. So I'd say ignore the rule.
Truncating numbers to a certain precision is completely arbitrary. There's no reason not to make it more arbitrary.
It seems like someone didn't like the step in precision between $9.99$ and $10.0$, so they moved it to between $19.99$ and $20.0$.
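To spell out that step, using the same one-unit-in-the-last-place convention as above: reporting to three significant figures, $9.99$ is known to about one part in a thousand, while $10.0$ is known only to one part in a hundred,
$$\frac{0.01}{9.99} \approx 0.1\% \qquad \text{vs.} \qquad \frac{0.1}{10.0} = 1\%,$$
a factor-of-ten jump. Under the modified rule, $10.00$ becomes the "three significant figure" form, and the same jump happens between $19.99$ and $20.0$ instead.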
In any field where results are clustered around a power of 10, doing this may be beneficial.