Why does the Euclidean algorithm for finding the GCD work?

I will try to explain why the Euclidean algorithm for finding the GCD of two numbers always works, using a standard argument in number theory: showing that a problem is equivalent to the same problem for smaller numbers.

Start with two numbers $a > b \ge 0$. You want to know two things:

  1. their greatest common divisor $g$,

  2. and how to represent $g$ as an integer combination of $a$ and $b$, that is, $g = ax + by$ for some integers $x$ and $y$.

It's clear that you know both of these things in the easy special case when $b = 0$: there $g = a$, and $a = 1 \cdot a + 0 \cdot b$ is the required combination.

Suppose $b > 0$. Then divide $a$ by $b$ to get a quotient $q$ and a remainder $r$ strictly smaller than $b$: $$ a = bq + r. \quad \text{(*)} $$
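For concreteness (the numbers here are just an illustrative choice, not from the question): with $a = 252$ and $b = 105$, division gives $q = 2$ and $r = 42$, since $$ 252 = 105 \cdot 2 + 42. $$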

Now any number that divides both $a$ and $b$ also divides $r = a - bq$, and so divides both $b$ and $r$. Conversely, any number that divides both $b$ and $r$ also divides $a = bq + r$, and so divides both $a$ and $b$. So the pairs $(a,b)$ and $(b,r)$ have exactly the same common divisors, which means the greatest common divisor of $a$ and $b$ is the same as the greatest common divisor of $b$ and $r$, and (1) has the same answer $g$ for both pairs.
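Carrying the same illustrative example forward, this equality applies at every step of the repeated division: $$ \gcd(252, 105) = \gcd(105, 42) = \gcd(42, 21) = \gcd(21, 0) = 21, $$ using $105 = 42 \cdot 2 + 21$ and $42 = 21 \cdot 2 + 0$.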

Moreover, if you can write $g$ as a combination of $b$ and $r$, then you can write it as a combination of $a$ and $b$: substitute $r = a - bq$ from (*) and collect terms. That means if you can solve (2) for the pair $(b,r)$ then you can solve it for the pair $(a,b)$.
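In the example, back-substituting through the two divisions $21 = 105 - 42 \cdot 2$ and $42 = 252 - 105 \cdot 2$ gives $$ 21 = 105 - (252 - 105 \cdot 2) \cdot 2 = 105 \cdot 5 - 252 \cdot 2, $$ so $g = 21$ is written as a combination of $252$ and $105$.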

Taken together, this argument shows that you can replace your problem for $(a,b)$ by the same problem for the smaller pair $(b,r)$. Since the second number strictly decreases and stays nonnegative, this can't go on forever: eventually you reach a pair of the form $(z, 0)$, where the easy special case above gives $g = z$, and you're done.
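If you want to see the whole recursion in one place, here is a minimal sketch in Python (the function name `egcd` and the tuple return convention are my own choices for illustration, not part of the argument above):

```python
def egcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g == a*x + b*y,
    for a > b >= 0 as in the argument above."""
    if b == 0:
        # Easy special case: gcd(a, 0) = a = 1*a + 0*b.
        return a, 1, 0
    q, r = divmod(a, b)    # a = b*q + r with 0 <= r < b  -- equation (*)
    g, x, y = egcd(b, r)   # solve the smaller problem for the pair (b, r)
    # Here g = b*x + r*y.  Substituting r = a - b*q gives g = a*y + b*(x - q*y).
    return g, y, x - q * y


print(egcd(252, 105))  # (21, -2, 5), i.e. 21 = 252*(-2) + 105*5
```

The recursion mirrors the argument exactly: the base case is the pair $(z, 0)$, and the final line is the substitution of (*) that turns a combination of $b$ and $r$ into a combination of $a$ and $b$.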