Explanation of specific parts of Urysohn's Lemma proof
You have a bunch of questions, and probably the best way to clarify them is to read the book in good faith and figure out what the author is saying. But I will try to address your concerns.
Answer to Q.1. I do not think you need to say w.l.o.g. first to choose the $V_{r_i}$. "W.l.o.g." is reserved for a different context, for different types of proofs. For example, there is a theorem that any infinite sequence of reals must contain either a non-decreasing or a non-increasing subsequence. You don't know which (unless you know more about the sequence), but you could always say: w.l.o.g. let us assume that the given sequence contains a non-decreasing subsequence. That may help you prove something; e.g. if the original sequence was bounded, you would now have a bounded monotone, and hence convergent, subsequence. So you could prove that every bounded sequence has a convergent subsequence, even if you can't tell whether it has a non-decreasing subsequence: you may assume that w.l.o.g., since the only other option is that it has a non-increasing subsequence, and that case would be treated in a similar way. So "w.l.o.g." is reserved for cases when you have two (or more) options, but they could all be treated simultaneously as if there were only one option, because they are similar.
I ended up stating and proving Theorem 2.7 below again, having overlooked the link provided by the OP. (I just realized Theorem 2.7 is clickable in your question, but at any rate I wrote my version here and will keep it, with notation that I like better, using $W$ for the open set that is "inserted" between a compact $C$ and an open $U$. I prefer to denote the compact set in Theorem 2.7 by $C$, since $K$ is already used in the statement of Urysohn's Lemma.)
Theorem 2.7. If $C$ is compact and $U$ is open with $C\subset U$, then there is an open $W$ with $C\subset W\subset\overline W\subset U$, and $\overline W$ compact.
Proof. For each $x\in C$ there is a neighborhood $U_x$ with $x\in U_x \subset\overline U_x\subset U$ and such that $\overline U_x$ is compact (this is where the standing assumption in Rudin, that the ambient space is locally compact Hausdorff, is used). Since $C$ is compact, there are finitely many $x_1,...,x_m\in C$ such that $C\subset(U_{x_1}\cup...\cup U_{x_m})$. We could let $W=(U_{x_1}\cup...\cup U_{x_m})$. Note that $\overline W=\overline{(U_{x_1}\cup...\cup U_{x_m})} = (\overline U_{x_1} \cup...\cup\overline U_{x_m})\subset U$, and $\overline W$ is compact, as the union of finitely many compact sets. Q.E.D.
In the present proof you use induction to conclude that the $V_{r_i}$ could be chosen in "such a manner". The author starts with choosing $V_0$ and $V_1$, or more precisely we apply Theorem 2.7 for the very first time to construct $V_0$ (using the compact $K$ and open $V$), such that $K\subset V_0\subset\overline V_0\subset V$, and $\overline V_0$ is compact. (That is, $C,U$ and $W$ from Theorem 2.7, as stated above, match $K,V$ and $V_0$ from the proof of Urysohn's Lemma.) Then we apply 2.7 to construct $V_1$ (using the compact $K$ and open $V_0$) such that $K\subset V_1\subset\overline V_1\subset V_0$, and $\overline V_1$ is compact. So far so good, since $r_1=0<1=r_2$ and $\overline V_1\subset V_0$. Then we apply 2.7 to construct $V_{r_3}$ (using the compact $\overline V_1$ and open $V_0$) such that $\overline V_1\subset V_{r_3}\subset\overline V_{r_3}\subset V_0$, and $\overline V_{r_3}$ is compact. In general, we apply 2.7 to construct $V_{r_{n+1}}$ (using the compact $\overline V_{r_j}$ and open $V_{r_i}$) such that $\overline V_{r_j}\subset V_{r_{n+1}}\subset\overline V_{r_{n+1}}\subset V_{r_i}$, and $\overline V_{r_{n+1}}$ is compact.
More precisely, if $n\ge2$ you would use all the sets $V_{r_k}$ with $k\le n$ (not just $V_{r_n}$) and, following the author (and the induction hypothesis), there is a largest $r_i<r_{n+1}$ and a smallest $r_j>r_{n+1}$ (where $i,j$ are between $1$ and $n$), and (by construction so far) $\overline V_{r_j}\subset V_{r_i}$, and $\overline V_{r_j}$ is compact. We need to choose $V_{r_{n+1}}$ such that $\overline V_{r_j}\subset V_{r_{n+1}}\subset \overline V_{r_{n+1}}\subset V_{r_i}$, and $\overline V_{r_{n+1}}$ is compact. Given (by induction hypothesis) that $\overline V_{r_j}\subset V_{r_i}$ and that $\overline V_{r_j}$ is compact, we use Theorem 2.7 with $C=\overline V_{r_j}\subset V_{r_i}=U$ to find $W=V_{r_{n+1}}$ with $\overline V_{r_j}\subset V_{r_{n+1}}\subset \overline V_{r_{n+1}}\subset V_{r_i}$.
Answer to Q.2. So you know that the $\overline V_{r_k}$ are compact because you construct them this way, using Theorem 2.7. You use the induction hypothesis that the sets $V_{r_k}$ already constructed have compact closures, in particular that $\overline V_{r_j}$ is compact, and then use Theorem 2.7 to construct (or pick) a new $V_{r_{n+1}}$ such that $\overline V_{r_{n+1}}$ is compact.
OK, I see there is a potential source of confusion here. You wrote $V_j$; I assume you meant $V_{r_j}$. Just to make sure you realize that, let us assume (w.l.o.g. :) that $r_3=\frac12$. Here $n+1=3$, $r_j=r_2=1$, $r_i=r_1=0$. So then, once $V_1=V_{r_2}$ and $V_0=V_{r_1}$ are chosen with $\overline V_1\subset V_0$, we pick $V_{r_3}=V_{\frac12}$ with $\overline V_1\subset V_{\frac12}\subset\overline V_{\frac12}\subset V_0$. You could write this alternatively as $\overline V_{r_2}\subset V_{r_3}\subset\overline V_{r_3}\subset V_{r_1}$, or also as $\overline V_{r_j}\subset V_{r_{n+1}}\subset \overline V_{r_{n+1}}\subset V_{r_i}$. But for most of the proof, the notation used would look more like $\overline V_1\subset V_{\frac12}\subset\overline V_{\frac12}\subset V_0$. So, if (w.l.o.g., just for an illustration) say $r_4=\frac56$, then we would have at the next step of the induction that $\overline V_{r_2}\subset V_{r_4}\subset\overline V_{r_4}\subset V_{r_3}$, that is, $\overline V_1\subset V_{\frac56}\subset\overline V_{\frac56}\subset V_{\frac12}$. Note here that $n+1=4$, $r_j=r_2=1$, and $r_i=r_3={\frac12}$, and we could write the above as $\overline V_{r_j}\subset V_{r_{n+1}}\subset \overline V_{r_{n+1}}\subset V_{r_i}$. (You may want to figure out the next step, say with $r_5=\frac13$, to practice with these indices.)
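If you want to practice with these indices mechanically, here is a small sketch (the helper `neighbors` and the continuation of the enumeration with $r_5=\frac13$ are my own illustration, not anything in Rudin): at each step it finds the largest $r_i<r_{n+1}$ and the smallest $r_j>r_{n+1}$ among the rationals used so far.

```python
from fractions import Fraction as F

def neighbors(rs, n):
    """Among r_1..r_n (1-based), find i with r_i the largest value
    below r_{n+1}, and j with r_j the smallest value above r_{n+1}."""
    r_new = rs[n]                       # r_{n+1} (0-based index n)
    prior = rs[:n]                      # r_1 .. r_n
    i = max((k for k in range(n) if prior[k] < r_new), key=lambda k: prior[k])
    j = min((k for k in range(n) if prior[k] > r_new), key=lambda k: prior[k])
    return i + 1, j + 1                 # back to 1-based subscripts

# the enumeration from the worked example: r_1=0, r_2=1, r_3=1/2, r_4=5/6, r_5=1/3
rs = [F(0), F(1), F(1, 2), F(5, 6), F(1, 3)]

for n in range(2, 5):                   # the steps choosing V_{r_3}, V_{r_4}, V_{r_5}
    i, j = neighbors(rs, n)
    print(f"closure(V_r{j}) c V_r{n+1} c V_r{i}   (r_i={rs[i-1]}, r_j={rs[j-1]})")
```

For the practice step $r_5=\frac13$ this reports $i=1$, $j=3$, i.e. $\overline V_{r_3}\subset V_{r_5}\subset\overline V_{r_5}\subset V_{r_1}$, which is $\overline V_{\frac12}\subset V_{\frac13}\subset\overline V_{\frac13}\subset V_0$.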
Answer to Q.3. It is enough to mention that $f$ is lower-semicontinuous and $g$ is upper-semicontinuous because the author goes on to show (further down in the proof) that $f=g$, hence $f$ is simultaneously lower-semicontinuous and upper-semicontinuous, and therefore continuous. (Exercise.)
Answer to Q.4. We assume $K$ is the one from the statement of Urysohn's Lemma. Your proof that $\{x:f(x)\neq 0\}\subset V_0$ (and the conclusion from there that $f$ has its support in $\overline V_0$) is correct. The only minor quibble: your "quantifiers" are a bit unclear. I would suggest being more specific and explicitly saying that "there exists" an $r$ with $x\in V_r$ and $f(x)\ge f_r(x)=r>0$. Where would that $r$ come from? Since $f(x)=\sup_s f_s(x)$ and $0<\frac{f(x)}2<f(x)$, there must be an $r$ with $\frac{f(x)}2\le f_r(x)$; then $f_r(x)=r>0$, so $x\in V_r$, and since $r>0$ we have $V_r\subset\overline V_r\subset V_0$.
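To see this sup argument concretely, here is a toy model (my own illustration, not Rudin's general construction): on the real line take $K=[-\frac12,\frac12]$, $V=(-2,2)$, and for each rational $r\in[0,1]$ let $V_r=(r-2,\,2-r)$, so that $r<s$ implies $\overline V_s\subset V_r$. Then $f(x)=\sup_r f_r(x)=\sup\{r: x\in V_r\}$, and every point where $f\neq0$ lies in some $V_r\subset V_0$.

```python
# Toy model (illustration only): V_r = (r-2, 2-r) for rationals r in [0,1],
# so r < s implies closure(V_s) subset of V_r.  f_r(x) = r if x in V_r else 0,
# and f = sup_r f_r, approximated here with the rationals k/N.
N = 1000
rationals = [k / N for k in range(N + 1)]

def f(x):
    # f(x) = sup { r : x in V_r } = sup { r : |x| < 2 - r }, and 0 if the set is empty
    return max((r for r in rationals if abs(x) < 2 - r), default=0.0)

print(f(0.0))   # every V_r contains 0, so f = 1 on K
print(f(2.5))   # outside V_0 = (-2, 2) the sup is over the empty set, so f = 0
print(f(1.5))   # between V_1 and V_0 the sup interpolates (close to 0.5 here)
```

In this model $\{x: f(x)\neq0\}=(-2,2)=V_0$, matching your support argument.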
If you study Rudin's proof of Urysohn’s Lemma and want to describe what it is doing at the start, you might come up with
The proof begins by invoking Theorem 2.7 a countably infinite number of times to construct a sequence of open sets, while 'tracking' the rational number 'assignments' using an 'interpolation' technique to keep the chain of open sets 'sorted'.
The proof is also implicitly using the Axiom of dependent choice, $\mathsf {DC}$, to construct this sequence of open sets. The OP should be aware that the construction of the chain of open sets
$$\tag 1 V_1 \subset \dots \subset V_\alpha \subset \dots \subset V_\beta \subset \dots \subset V_0 \text{ where } \alpha,\beta \in \{r_n\} \text{ and } \alpha \gt \beta $$
is accomplished by creating a function (sequence) using recursion/choice. If the chain has length $n$, you can increase the length by $1$ while keeping that rational number 'link-up' in place. Recursion lets you finish with a countable family $(V_{r_n})_{n\ge1}$, and after 'sorting', you have a two-sided chain as expressed by $\text{(1)}$.
Usually (almost always?) when defining a sequence using recursion the index set consists of the natural numbers. But in Rudin's recursion step, we find
Suppose $n \ge 2$ and $V_{r_1}, V_{r_2}, \dots, V_{r_n}$ have been chosen in such a manner that...
It might be helpful (more rigorous?) to think of each of the $V$ sets as having two subscripts: the rational number, as well as the $n$ corresponding to its recursive construction.
The recursion also starts off with an 'initialized state' for the first two terms of the sequence, $V_{r_1} = V_0$ and $V_{r_2} = V_1$. Also, the way the recursion works, all the prior terms of the sequence are inputs to form the next 'interpolation' term (no Fibonacci here!). The final output structure of the recursion, the infinite chain $\text{(1)}$ with some concomitant properties, is all that is needed to create the desired continuous function.
It is helpful to imagine rational numbers like $\frac{1}{2}$,$\frac{1}{4}$,$\frac{3}{4}$, etc. 'feeding the recursion' and creating the next term of the sequence as well as the chain relation in $\text{(1)}$.
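That feeding process can be sketched in a toy model (my own illustration, not Rudin's general construction): represent each $V_r$ by a symmetric open interval $(-a_r, a_r)$ on the real line, and let a midpoint choice of radius stand in for the open set that Theorem 2.7 inserts between $\overline V_{r_j}$ and $V_{r_i}$.

```python
from fractions import Fraction as F

# Toy recursion (illustration only): each open set V_r is the interval
# (-a[r], a[r]).  The "insertion" Theorem 2.7 provides abstractly is
# modeled by picking the midpoint radius between the two neighboring sets,
# using ALL prior terms of the sequence as inputs, as in the proof.
rationals = [F(0), F(1), F(1, 2), F(1, 4), F(3, 4), F(1, 3), F(2, 3)]

a = {F(0): F(2), F(1): F(1)}            # initialized state: V_0 and V_1
for r in rationals[2:]:                 # feed the remaining rationals in
    below = max(s for s in a if s < r)  # largest r_i < r_{n+1}
    above = min(s for s in a if s > r)  # smallest r_j > r_{n+1}
    a[r] = (a[below] + a[above]) / 2    # insert strictly between them

# the chain (1): r < s must give closure(V_s) inside V_r, i.e. a[s] < a[r]
for r in sorted(a):
    print(r, a[r])
```

The printed radii are strictly decreasing in $r$, which is exactly the 'sorted' two-sided chain $\text{(1)}$ for this finite stage of the recursion.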
To picture how the proof creates the continuous function $f$, try to 'see it over' the $\text{(1)-chain expression}$ as a decreasing function that starts at $1$ over $V_1$ and goes to $0$ over $V_0$. The picture will 'look like' an increasing continuous function
$\quad g: [0,1] \to [0,1] \text{ (after reversing the (1)-chain)}.$
But of course the topological space $X$ can be finite in which case this 'picture' is not accurate (see example in next section).
In the proof you will see the use of the supremum function; it is how you can define the desired function $f$ while guaranteeing that it is continuous.
Example 1: Let $X =\{0,1\}$ be the discrete topological space containing $2$ points. Let $K = \{1\}$ and $V = X$. Using the argument found in Rudin's proof of Urysohn's Lemma, different continuous functions can be created depending on the choices made at each step; the two extreme choices yield
$\quad {\chi}_K \text{ and } {\chi}_X$
In this case using the rational number enumeration $(r_n)$ leads to a bit of 'wheel spinning'.
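The two extreme runs of the construction in Example 1 can be simulated directly (a sketch under my own encoding: sets are Python frozensets, and a 'policy' fixes which allowed open set is chosen at every step; in the discrete topology closures are the sets themselves, so the chain condition only requires $V_s\subseteq V_r$ for $r<s$).

```python
# Example 1 simulated: X = {0, 1}, K = {1}, V = X, discrete topology.
X, K = frozenset({0, 1}), frozenset({1})
rationals = [0, 1, 1/2, 1/4, 3/4]       # any enumeration starting 0, 1

def build_f(policy):
    # policy(r) returns the open set chosen for V_r at each recursion step
    V = {r: policy(r) for r in rationals}
    def f(x):
        # f(x) = sup { r : x in V_r }, with sup of the empty set taken as 0
        return max((r for r in rationals if x in V[r]), default=0)
    return f(0), f(1)                   # f as the pair (f(0), f(1))

print(build_f(lambda r: K))   # (0, 1): the characteristic function of K
print(build_f(lambda r: X))   # (1, 1): the constant 1, i.e. chi_X
```

Always choosing the smallest allowed set gives $\chi_K$, always choosing the largest gives $\chi_X$; the enumeration $(r_n)$ does no real work here, which is the 'wheel spinning' mentioned above.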