Differentiate expressions involving the symmetric matrix $\mathbf{D}=\mathrm{diag}(\tau)\,\Omega\,\mathrm{diag}(\tau)$ with respect to an element of $\tau$
For the first formula, we seem to have: \begin{gather*} |\mathbf{D}| = |\Omega|\tau_1^2\tau_2^2\cdots\tau_q^2, \\ \therefore\ \log|\mathbf{D}| = \log|\Omega| + 2\log\tau_1 + 2\log\tau_2 + \cdots + 2\log\tau_q, \\ \therefore\ \frac{\partial\log|\mathbf{D}|}{\partial\tau_g} = \frac2{\tau_g} \quad (g = 1, \ldots, q), \end{gather*} assuming throughout (as below) that $\tau_1, \ldots, \tau_q > 0.$ For the second formula, let $\mathbf{b}^{\mathrm{T}} = (b_1, \ldots, b_q),$ and write $\Omega^{-1} = (c_{hg})_{1\leqslant h \leqslant q, 1 \leqslant g \leqslant q}.$ That is, for $g = 1, \ldots, q,$ the $g$th column of $\Omega^{-1}$ is $(c_{1g}, \ldots, c_{qg})^{\mathrm{T}}.$ After a page of messy calculation (using the symmetry $c_{hg} = c_{gh}$ to combine terms), I arrive at the formula: $$ \frac{\partial(\mathbf{b}^{\mathrm{T}}\mathbf{D}^{-1}\mathbf{b})}{\partial\tau_g} = -\frac{2b_g}{\tau_g^2}\sum_{h=1}^q\frac{b_hc_{hg}}{\tau_h}. $$
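(Before the proofs, a quick numerical sanity check may be reassuring. The following sketch, in Python with NumPy, compares both closed-form derivatives against central finite differences; `Omega`, `b`, `tau`, and all helper names are illustrative choices of mine, not anything fixed by the problem.)

```python
import numpy as np

rng = np.random.default_rng(0)
q, g = 4, 2  # dimension, and the index of tau_g (0-based here)

# A random symmetric positive-definite Omega, plus arbitrary b and tau > 0.
A = rng.standard_normal((q, q))
Omega = A @ A.T + q * np.eye(q)
b = rng.standard_normal(q)
tau = rng.uniform(0.5, 2.0, q)

def D(t):
    return np.diag(t) @ Omega @ np.diag(t)

def logdetD(t):
    return np.linalg.slogdet(D(t))[1]

def quad(t):
    return b @ np.linalg.solve(D(t), b)

# Central finite differences in the g-th coordinate.
h = 1e-6
e = np.eye(q)[g]
fd_logdet = (logdetD(tau + h * e) - logdetD(tau - h * e)) / (2 * h)
fd_quad = (quad(tau + h * e) - quad(tau - h * e)) / (2 * h)

# Closed forms: 2/tau_g and -(2 b_g / tau_g^2) * sum_h b_h c_{hg} / tau_h.
C = np.linalg.inv(Omega)
closed_logdet = 2 / tau[g]
closed_quad = -(2 * b[g] / tau[g] ** 2) * np.sum(b * C[:, g] / tau)

print(np.isclose(fd_logdet, closed_logdet))  # True
print(np.isclose(fd_quad, closed_quad))      # True
```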
In its dependence on $\tau_g,$ the expression $\mathbf{b}^{\mathrm{T}}\mathbf{D}^{-1}\mathbf{b}$ determines a function $f \colon \mathbb{R}_{>0} \to \mathbb{R},$ where \begin{align*} f(\tau_g) & = \mathbf{b}^{\mathrm{T}}\mathbf{D}^{-1}\mathbf{b} \\ & = \mathbf{b}^{\mathrm{T}}\operatorname{diag}(\tau)^{-1} \Omega^{-1}\operatorname{diag}(\tau)^{-1}\mathbf{b} \\ & = \left(\frac{\mathbf{b}}{\tau}\right)^{\mathrm{T}} \!\Omega^{-1}\left(\frac{\mathbf{b}}{\tau}\right). \end{align*} Here, for the sake of brevity, I have written: $$ \frac{\mathbf{b}}{\tau} = \left( \frac{b_1}{\tau_1}, \ldots, \frac{b_q}{\tau_q}\right)^{\mathrm{T}}. $$
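(This rewriting is immediate to confirm numerically; a two-line check, continuing with the illustrative arrays from the snippet above:)

```python
# D^{-1} = diag(tau)^{-1} Omega^{-1} diag(tau)^{-1}, so the quadratic form in b
# collapses to a quadratic form in the rescaled vector b / tau.
lhs = b @ np.linalg.solve(D(tau), b)
rhs = (b / tau) @ np.linalg.solve(Omega, b / tau)
print(np.isclose(lhs, rhs))  # True
```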
Probably the most sensible way to prove the result is by applying the Chain Rule to the decomposition $f(t) = \beta(\alpha(t)),$ where $\alpha \colon \mathbb{R}_{>0} \to \mathbb{R},$ $t \mapsto b_g/t,$ and $\beta \colon \mathbb{R} \to \mathbb{R},$ $x \mapsto (\mathbf{a} + x\mathbf{d})^{\mathrm{T}}\Omega^{-1}(\mathbf{a} + x\mathbf{d}),$ where $\mathbf{a} = (b_1/\tau_1, \ldots, 0, \ldots, b_q/\tau_q)^{\mathrm{T}}$ and $\mathbf{d} = (0, \ldots, 1, \ldots, 0)^{\mathrm{T}},$ the zero and the one both occupying the $g$th position. Such a proof requires only differentiating a quadratic function of $x,$ and it has enough structure to inspire a feeling of confidence in the result. Personally, however, I still prefer a more advanced proof, using only the familiar formula $\frac{d}{dt}\frac1t = -\frac1{t^2},$ in conjunction with general results about the Fréchet derivatives of linear and bilinear maps. This gives the final formula more directly, but it requires careful handling, in order to avoid creating thickets of LISP-like nested parentheses (as in my original "messy" handwritten proof). Although it probably can't be recommended objectively, I can't resist giving it here. The function $f$ is expressed in quite a simple way as a composite of four functions, \begin{gather*} f \colon \mathbb{R}_{>0} \xrightarrow{\alpha} \mathbb{R} \xrightarrow{\gamma} \mathbb{R}^q \xrightarrow{\delta} \mathbb{R}^q \times \mathbb{R}^q \xrightarrow{\epsilon} \mathbb{R}, \\ \alpha(t) = \frac{b_g}t \quad (t > 0), \\ \gamma(u) = \mathbf{a} + u\mathbf{d} = \left(\frac{b_1}{\tau_1}, \ldots, u, \ldots, \frac{b_q}{\tau_q}\right)^{\mathrm{T}} \quad (u \in \mathbb{R}), \\ \delta(x) = (x, x) \quad (x \in \mathbb{R}^q), \\ \epsilon(x, y) = x^{\mathrm{T}}\Omega^{-1}y \quad (x, y \in \mathbb{R}^q). \end{gather*} Clearly, $$ \gamma(\alpha(\tau_g)) = \frac{\mathbf{b}}{\tau}. $$ By the usual rules of differentiation for functions $\mathbb{R}_{>0} \to \mathbb{R},$ $$ \alpha'(t)(k) = -\frac{b_gk}{t^2} \quad (t > 0, \ k \in \mathbb{R}). $$ Because $\gamma$ is the sum of a constant mapping and a linear mapping, $$ \gamma'(u)(s) = s\mathbf{d} \quad (u, s \in \mathbb{R}). $$ By the Chain Rule, $$(\gamma \circ \alpha)'(t) = \gamma'(\alpha(t)) \circ \alpha'(t) \quad (t > 0). $$ That is, \begin{align*} (\gamma \circ \alpha)'(t)(k) & = \gamma'(\alpha(t))(\alpha'(t)(k))\\ & = \alpha'(t)(k)\mathbf{d} \\ & = -\frac{b_gk}{t^2}\mathbf{d} \quad (t > 0, \ k \in \mathbb{R}). \end{align*} Because $\delta$ is linear, $$ \delta'(x)(v) = \delta(v) \quad (x, v \in \mathbb{R}^q). $$ Because $\epsilon$ is bilinear and symmetric, \begin{align*} \epsilon'(x, y)(v, w) & = \epsilon(x, w) + \epsilon(v, y) \\ & = \epsilon(x, w) + \epsilon(y, v) \quad (x, y, v, w \in \mathbb{R}^q). \end{align*} By the Chain Rule, $$ (\epsilon \circ \delta)'(x) = \epsilon'(\delta(x)) \circ \delta'(x) \quad (x \in \mathbb{R}^q). $$ That is, \begin{align*} (\epsilon \circ \delta)'(x)(v) & = \epsilon'(\delta(x))(\delta'(x)(v)) \\ & = \epsilon'(\delta(x))(\delta(v)) \\ & = \epsilon'(x, x)(v, v) \\ & = 2\epsilon(x, v) \quad (x, v \in \mathbb{R}^q). \end{align*} By the Chain Rule again, $$ f'(t) = (\epsilon \circ \delta)'(\gamma(\alpha(t))) \circ (\gamma \circ \alpha)'(t) \quad (t > 0). $$ That is, $$ f'(t)(k) = (\epsilon \circ \delta)'(\gamma(\alpha(t))) \bigl((\gamma \circ \alpha)'(t)(k)\bigr) \quad (t > 0, \ k \in \mathbb{R}). $$
Therefore, \begin{align*} f'(\tau_g)(k) & = (\epsilon \circ \delta)' \left(\frac{\mathbf{b}}{\tau}\right) \left(-\frac{b_gk}{\tau_g^2}\mathbf{d}\right) \\ & = 2\epsilon\left(\frac{\mathbf{b}}{\tau}, -\frac{b_gk}{\tau_g^2}\mathbf{d}\right) \\ & = -\frac{2b_gk}{\tau_g^2}\epsilon\left( \frac{\mathbf{b}}{\tau}, \mathbf{d}\right) \\ & = -\frac{2b_gk}{\tau_g^2} \left( \frac{b_1}{\tau_1}, \ldots, \frac{b_q}{\tau_q}\right) \Omega^{-1}\mathbf{d} \\ & = -\frac{2b_gk}{\tau_g^2} \left(\frac{b_1}{\tau_1}, \ldots, \frac{b_q}{\tau_q}\right) (c_{1g}, \ldots, c_{qg})^{\mathrm{T}} \\ & = -\frac{2b_gk}{\tau_g^2}\sum_{h=1}^q\frac{b_hc_{hg}}{\tau_h} \quad (k \in \mathbb{R}), \end{align*} which is the formula stated at the outset.
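(The abstract bookkeeping can also be checked mechanically. The sketch below, still using the illustrative arrays from the first snippet, builds $f$ literally as the composite $\epsilon \circ \delta \circ \gamma \circ \alpha$ and compares a finite-difference derivative with the result just obtained.)

```python
# The four factors of f, specialised to the g-th coordinate.
a = b / tau
a[g] = 0.0          # the vector a: b/tau with its g-th entry zeroed
d = np.eye(q)[g]    # the vector d: the g-th standard basis vector

alpha = lambda t: b[g] / t
gamma = lambda u: a + u * d
delta = lambda x: (x, x)
epsilon = lambda x, y: x @ np.linalg.solve(Omega, y)
f = lambda t: epsilon(*delta(gamma(alpha(t))))

# Finite-difference derivative of the composite vs. the formula derived above.
fd = (f(tau[g] + h) - f(tau[g] - h)) / (2 * h)
closed = -(2 * b[g] / tau[g] ** 2) * epsilon(b / tau, d)
print(np.isclose(fd, closed))  # True
```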
To conclude, here are the remaining details of the more "sensible" (although in my opinion less intuitive) proof that was outlined earlier. By the Chain Rule (this time in its ordinary single-variable form, no longer expressed in terms of Fréchet derivatives): $$ f'(t) = \beta'(\alpha(t)) \cdot \alpha'(t) \quad (t > 0). $$ We have already differentiated $\alpha,$ thus: $$ \alpha'(t) = -\frac{b_g}{t^2} \quad (t > 0). $$ We can also easily differentiate $\beta,$ because it is a quadratic function. Using the symmetry of $\Omega,$ we have: \begin{align*} \beta(x) & = (\mathbf{a} + x\mathbf{d})^{\mathrm{T}}\Omega^{-1} (\mathbf{a} + x\mathbf{d}) \\ & = (\mathbf{d}^{\mathrm{T}}\Omega^{-1}\mathbf{d})x^2 + (\mathbf{d}^{\mathrm{T}}\Omega^{-1}\mathbf{a})x + (\mathbf{a}^{\mathrm{T}}\Omega^{-1}\mathbf{d})x + \mathbf{a}^{\mathrm{T}}\Omega^{-1}\mathbf{a} \\ & = (\mathbf{d}^{\mathrm{T}}\Omega^{-1}\mathbf{d})x^2 + 2(\mathbf{a}^{\mathrm{T}}\Omega^{-1}\mathbf{d})x + \mathbf{a}^{\mathrm{T}}\Omega^{-1}\mathbf{a} \quad (x \in \mathbb{R}), \end{align*} therefore \begin{align*} \beta'(x) & = 2(\mathbf{d}^{\mathrm{T}}\Omega^{-1}\mathbf{d})x + 2(\mathbf{a}^{\mathrm{T}}\Omega^{-1}\mathbf{d}) \\ & = 2(\mathbf{a} + x\mathbf{d})^{\mathrm{T}}\Omega^{-1}\mathbf{d} \quad (x \in \mathbb{R}). \end{align*} But $$ \mathbf{a} + \alpha(\tau_g)\mathbf{d} = \frac{\mathbf{b}}{\tau}, $$ therefore $$ f'(\tau_g) = \beta'(\alpha(\tau_g)) \cdot \alpha'(\tau_g) = -\frac{2b_g}{\tau_g^2} \left(\frac{\mathbf{b}}{\tau}\right)^{\mathrm{T}} \!\!\Omega^{-1}\mathbf{d}, $$ the same expression that emerged more transparently from the other proof.
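(For completeness, the same kind of check for this route, reusing the illustrative `a`, `d`, and `alpha` from the previous snippet: $\beta$, its derivative in the compact form above, and the ordinary chain rule.)

```python
# beta and its derivative in the form 2 (a + x d)^T Omega^{-1} d.
beta = lambda x: (a + x * d) @ np.linalg.solve(Omega, a + x * d)
beta_prime = lambda x: 2 * (a + x * d) @ np.linalg.solve(Omega, d)
alpha_prime = lambda t: -b[g] / t ** 2

# f'(tau_g) = beta'(alpha(tau_g)) * alpha'(tau_g), checked by finite differences.
chain = beta_prime(alpha(tau[g])) * alpha_prime(tau[g])
fd = (beta(alpha(tau[g] + h)) - beta(alpha(tau[g] - h))) / (2 * h)
print(np.isclose(chain, fd))  # True
```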