Quotient field operations are well-defined: fleshing out Vinberg's sketch
Recall that the scaling relation $\,\sim:\,$ is defined by $\, (a,b) \sim: (c,d)\iff (c,d) = (ea,eb)\,$ for some $\,e\neq 0,\,$ i.e. $\,\large \frac{a}b \sim: \frac{e\,a}{e\,b}.\,$ Such pairs have equal cross-multiples $\,a(eb) = eab = b(ea),\,$ so $\,f\sim:g\,\Rightarrow\, f\sim g.$
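For a concrete instance of the definition: with $\,e=3\,$ we have $\,(2,4)\sim:(6,12),\,$ i.e. $\,\large\frac{2}4\sim:\frac{6}{12},\,$ and the cross-multiples agree: $\,2\cdot 12 = 24 = 4\cdot 6,\,$ so indeed $\,(2,4)\sim(6,12).$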
The Lemma in the prior question shows that every cross-multiplication equivalence $\,f_1\sim f_2\,$ can be decomposed into a pair of scaling relations, i.e. $\,f_1\sim f_2\iff f_1\sim:f:\sim f_2\ $ for some $\,f;\,$ in other words, $\,f_1,\,f_2\,$ are cross-multiplication equivalent $\iff$ they have a common scaling $\,f.\,$
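For example, $\,(2,4)\sim(3,6)\,$ by $\,2\cdot 6 = 12 = 4\cdot 3,\,$ and a common scaling is $\,f=(6,12)\!:\,$ indeed $\,(2,4)\sim:(6,12)\,$ via $\,e=3\,$ and $\,(3,6)\sim:(6,12)\,$ via $\,e=2,\,$ i.e. $\,(2,4)\sim:(6,12):\sim(3,6).$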
Thus it suffices to prove that addition and multiplication are compatible with the scaling relation. This follows from the scaling symmetry of the addition & multiplication formulas due to their linear form, i.e. $\, s(f_1)\sim: \color{#c00}e\,s(f_1) = s(\color{#c00}ef_1) = s(f)\,$ below, where we prove compatibility in the first argument of addition using the sum function $\,s(x) := x + g_1,\,$ for $\,g_1 = (c,d).$
$\ \ \ \ \ \ \ \begin{align}f_1 + g_1\ \ \ \ \ &\sim: \ \ \ \ \ f + g_1 \\[.2em] f_1 \ \ \ \sim:\ \ \ \ f \ \ \ \ \, \smash[t]{\color{#0a0}{\overset{\rm C}\Longrightarrow}}\, \ \ \ \ \ \ \ \ s(f_1)\ \ \ \ \ \ \ & \sim:\ \ \ \ \ \ \ s(f)\\[.2em] \ {\rm i.e.}\ \ \ \ (a,b)\sim:(ea,eb)\,\Rightarrow\, (a,b)+(c,d)&\sim: (\color{#c00}ea,\color{#c00}eb)+(c,d)\ \ = \ s(\color{#c00}ef_1) \\[.2em] {\rm by}\ \ \ \ (ad\!+\!cb,\,bd) &\sim: (\color{#c00}ead\!+\!\color{#c00}ecb,\,\color{#c00}ebd)\ \ = \ \color{#c00}e\,s(f_1) \end{align}\ \ \ \ \ \qquad$
${\rm Then}\ \ f_1\sim f_2\,\Rightarrow\, s(f_1)\sim s(f_2)\,$ follows by applying $\,\smash[t]{\color{#0a0}{\overset{\rm C}\Rightarrow}}\,$ to a $\,\sim:\,$ decomposition of $\, f_1 \sim f_2\,$
$\ \ \ \ \ \ \ \ \ \, f_1\sim f_2\,\Rightarrow\begin{align}f_1\sim: f\\[.2em] f_2\sim: f\end{align}$ $\:\color{#0a0}{\overset{\rm C}\Rightarrow}\,\begin{align}s(f_1)\sim: s(f)\\[.2em] s(f_2)\sim: s(f)\end{align}$ $\,\Rightarrow\begin{align}s(f_1)\sim s(f)\\[.2em] s(f_2)\sim s(f)\end{align}$ $\,\color{#08f}\Rightarrow\! \begin{align} s(f_1)\,&\sim\, s(f_2),\,\ {\rm i.e.}\\[.2em] f_1+g_1&\sim \color{#08f}{f_2+g_1}\end{align}$
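For a concrete run of this chain, take $\,f_1 = (2,4),\ f_2 = (3,6)\,$ with common scaling $\,f = (6,12)\,$ as in the example above, and $\,g_1 = (1,3),\,$ so
$\qquad\begin{align} s(f_1) &= (2,4)+(1,3) = (2\cdot 3+1\cdot 4,\ 4\cdot 3) = (10,12)\\[.2em] s(f_2) &= (3,6)+(1,3) = (3\cdot 3+1\cdot 6,\ 6\cdot 3) = (15,18)\\[.2em] s(f)\ \, &= (6,12)+(1,3) = (6\cdot 3+1\cdot 12,\ 12\cdot 3) = (30,36)\end{align}$
hence $\,s(f_1)\sim: s(f)\,$ via $\,e=3,\,$ $\,s(f_2)\sim: s(f)\,$ via $\,e=2,\,$ and indeed $\,s(f_1)\sim s(f_2)\,$ by $\,10\cdot 18 = 180 = 12\cdot 15.$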
Similarly (or using symmetry and commutativity) we get $\ g_1\sim g_2\,\Rightarrow\, \color{#08f}{f_2+g_1}\sim f_2+ g_2\,$ thus
$\rm\color{#08f}{transitivity}$ of $\,\sim\,$ yields $\,\ \ f_1\sim f_2,\ g_1\sim g_2\,\Rightarrow\, f_1+g_1\sim f_2+g_2\qquad $
which means $\,\sim\,$ is compatible with addition. Multiplication compatibility follows similarly.
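For completeness, here is a minimal sketch of that step, using the product function $\,p(x) := x\cdot g_1\,$ for $\,g_1=(c,d)\,$ and the usual product formula $\,(a,b)\cdot(c,d) = (ac,\,bd)\!:$
$\qquad\begin{align} (a,b)\sim:(ea,eb)\,\Rightarrow\, (a,b)\cdot(c,d) &\sim: (\color{#c00}ea,\color{#c00}eb)\cdot(c,d)\ = \ p(\color{#c00}ef_1)\\[.2em] {\rm by}\ \ \ \ (ac,\,bd) &\sim: (\color{#c00}eac,\,\color{#c00}ebd)\ = \ \color{#c00}e\,p(f_1)\end{align}$
after which the decomposition, second-argument, and transitivity steps go through exactly as for addition.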
Remark $ $ These tedious proofs are usually "left to the reader" in expositions. One can avoid them by instead using a more algebraic construction of fraction rings via quotients of polynomial rings, where we adjoin an inverse $\,x_a\,$ for each $\,a\neq 0\,$ via extension rings $\, A_j[x_a]/(ax_a-1).\,$
In this approach the proofs follow immediately from universal properties of polynomial and quotient rings. The two approaches are related by the fact that the fraction pairs correspond to normal forms in these quotient rings, where every element is equivalent to a monomial $\,a\, x_{a_1}\cdots x_{a_k}\,$ (essentially by choosing a common "denominator"), denoted by the "fraction" $\,a/(a_1\cdots a_k)\,$ or, set-theoretically, by the pair $\,(a,\,a_1\cdots a_k).\,$ This is analogous to Hamilton's pair-representation of complex numbers $\,(a,b),\,$ corresponding to normal forms (least-degree reps) $\,a+bx\,$ in $\,\Bbb R[x]/(x^2\!+1)\cong \Bbb C.\,$ For more on this viewpoint see here (there we consider a more general construction (localization), which inverts the elements in some specified subset $\,S\subseteq A).$
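For instance, in the notation above, taking $\,A = \Bbb Z\,$ and adjoining inverses $\,x_2,\,x_3\,$ of $\,2,\,3,\,$ in $\,\Bbb Z[x_2,x_3]/(2x_2\!-\!1,\,3x_3\!-\!1)\,$ we have $\,1 = 2x_2 = 3x_3,\,$ so
$\qquad x_2 + x_3\, =\, (3x_3)\,x_2 + (2x_2)\,x_3\, =\, 5\,x_2x_3$
i.e. the familiar $\,\large\frac12+\frac13 = \frac5{6},\,$ with normal form denoted by the pair $\,(5,\ 2\cdot 3).$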