How to take the gradient of a quadratic form?
The only thing you need to remember/know is that $$\dfrac{\partial (x^Ty)}{\partial x} = y$$ and the chain rule, which for $f(x, y(x))$ reads $$\dfrac{d f(x,y)}{d x} = \dfrac{\partial f(x,y)}{\partial x} + \dfrac{d\, y^T(x)}{d x} \dfrac{\partial f(x,y)}{\partial y}$$ Hence, $$\dfrac{d(b^Tx)}{d x} = \dfrac{d (x^Tb)}{d x} = b$$
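As a quick numerical sanity check of $\dfrac{d(b^Tx)}{dx} = b$, here is a minimal sketch (assuming NumPy; all names are illustrative): a central finite difference recovers $b$, independent of $x$.

```python
import numpy as np

# Sanity check: the gradient of b^T x is b, independent of x.
rng = np.random.default_rng(0)
n = 4
b = rng.standard_normal(n)
x = rng.standard_normal(n)
eps = 1e-6

grad = np.empty(n)
for k in range(n):
    e = np.zeros(n)
    e[k] = 1.0  # k-th standard basis vector
    # central finite difference in the k-th coordinate
    grad[k] = (b @ (x + eps * e) - b @ (x - eps * e)) / (2 * eps)

assert np.allclose(grad, b)
```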
Applying the chain rule to $x^TAx$ with $y = Ax$, $$\dfrac{d (x^TAx)}{d x} = \dfrac{\partial (x^Ty)}{\partial x} + \dfrac{d\, y^T(x)}{d x} \dfrac{\partial (x^Ty)}{\partial y} = y + \dfrac{d (x^TA^T)}{d x}\, x = y + A^Tx = (A+A^T)x$$
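You can also verify the final formula numerically; below is a minimal sketch (assuming NumPy, with a deliberately non-symmetric $A$ so that $(A+A^T)x \neq 2Ax$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))  # deliberately not symmetric
x = rng.standard_normal(n)
f = lambda z: z @ A @ z          # f(z) = z^T A z
eps = 1e-6

# Gradient via central finite differences, one coordinate at a time.
grad = np.empty(n)
for k in range(n):
    e = np.zeros(n)
    e[k] = 1.0
    grad[k] = (f(x + eps * e) - f(x - eps * e)) / (2 * eps)

assert np.allclose(grad, (A + A.T) @ x, atol=1e-6)
```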
Let $f : \mathbb R^n \to \mathbb R$ be defined by
$$f (\mathrm x) := \mathrm x^\top \mathrm A \, \mathrm x$$
Then,
$$f (\mathrm x + h \mathrm v) = (\mathrm x + h \mathrm v)^\top \mathrm A \, (\mathrm x + h \mathrm v) = f (\mathrm x) + h \, \mathrm v^\top \mathrm A \,\mathrm x + h \, \mathrm x^\top \mathrm A \,\mathrm v + h^2 \, \mathrm v^\top \mathrm A \,\mathrm v$$
Thus, the directional derivative of $f$ in the direction of $\rm v$ at $\rm x$ is
$$\lim_{h \to 0} \frac{f (\mathrm x + h \mathrm v) - f (\mathrm x)}{h} = \mathrm v^\top \mathrm A \,\mathrm x + \mathrm x^\top \mathrm A \,\mathrm v = \langle \mathrm v , \mathrm A \,\mathrm x \rangle + \langle \mathrm A^\top \mathrm x , \mathrm v \rangle = \left\langle \mathrm v , \color{blue}{\left(\mathrm A + \mathrm A^\top\right) \,\mathrm x} \right\rangle$$
Lastly, the gradient of $f$ with respect to $\rm x$ is
$$\nabla_{\mathrm x} \, f (\mathrm x) = \color{blue}{\left(\mathrm A + \mathrm A^\top\right) \,\mathrm x}$$
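To see the directional-derivative argument numerically, here is a small sketch (assuming NumPy; names are illustrative). The difference quotient deviates from $\left\langle \mathrm v , \left(\mathrm A + \mathrm A^\top\right) \mathrm x \right\rangle$ by exactly $h \, \mathrm v^\top \mathrm A \, \mathrm v$, so the error shrinks linearly with $h$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)
v = rng.standard_normal(n)
f = lambda z: z @ A @ z

exact = v @ (A + A.T) @ x  # the claimed directional derivative <v, (A + A^T) x>
for h in (1e-2, 1e-4, 1e-6):
    dq = (f(x + h * v) - f(x)) / h  # difference quotient
    print(h, dq - exact)            # error is h * v^T A v, linear in h
```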
There is another way to calculate the most complex term, $\frac{\partial}{\partial x_k} \mathbf{x}^T A \mathbf{x}$. It requires nothing beyond the partial derivative with respect to a single variable rather than a vector.
This answer is for those who are not very familiar with partial derivatives and the chain rule for vectors (for example, me). It looks long only because I write out all the details. :)
Firstly, expanding the quadratic form yields: $$ f = \frac{\partial}{\partial x_k} \mathbf{x}^T A \mathbf{x} = \sum_{i=1}^N \sum_{j=1}^N a_{ij}\frac{\partial}{\partial x_k}(x_i x_j) $$ Since $$ \frac{\partial}{\partial x_k}(x_i x_j) = \begin{cases} x_j, && \text{if } k = i \neq j \\ x_i, && \text{if } k = j \neq i \\ 2x_k, && \text{if } k = i = j \\ 0, && \text{otherwise} \end{cases} $$ the double sum collapses to $$ f = \sum_{j=1}^N a_{kj} x_j + \sum_{i=1}^N a_{ik} x_i $$ (note that the $i = j = k$ term contributes $a_{kk} x_k$ to each of the two sums, which exactly accounts for the $2x_k$ case). Almost done! Now we only need some simplification. Recall the very simple rule that $$ \sum_{i=1}^N x_i y_i = \begin{bmatrix} x_1 \\ \vdots \\ x_N \end{bmatrix}^T \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix} = \mathbf{x}^T \mathbf{y} $$ Thus $$ \begin{align} f &= \text{(k-th row of A) } \mathbf{x} + \text{(k-th column of A)}^T \mathbf{x} \end{align} $$ Now it is time to assemble the gradient from the partial derivatives! $$ \begin{align} \nabla_\mathbf{x} \mathbf{x}^T A \mathbf{x} & = \begin{bmatrix} \frac{\partial \mathbf{x}^T A \mathbf{x}}{\partial x_1} \\ \vdots \\ \frac{\partial \mathbf{x}^T A \mathbf{x}}{\partial x_k} \\ \vdots \\ \frac{\partial \mathbf{x}^T A \mathbf{x}}{\partial x_N} \\ \end{bmatrix} = \begin{bmatrix} \vdots \\ \text{(k-th row of A) } \mathbf{x} + \text{(k-th column of A)}^T \mathbf{x} \\ \vdots \end{bmatrix} \\ &= \left( \begin{bmatrix} \vdots \\ \text{(k-th row of A) } \\ \vdots \end{bmatrix} + \begin{bmatrix} \vdots \\ \text{(k-th column of A) }^T \\ \vdots \end{bmatrix} \right) \mathbf{x} \\ &= (A + A^T)\mathbf{x} \end{align} $$ So we are done! The answer is: $$ \nabla_\mathbf{x} \mathbf{x}^T A \mathbf{x} = (A + A^T)\mathbf{x} $$
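If you want to double-check this construction, it transcribes almost literally into code; here is a sketch assuming NumPy, building the gradient entry by entry as (k-th row of $A$) $\mathbf{x}$ + (k-th column of $A$)$^T \mathbf{x}$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5
A = rng.standard_normal((N, N))
x = rng.standard_normal(N)

# k-th entry: (k-th row of A) @ x + (k-th column of A) @ x
grad = np.array([A[k, :] @ x + A[:, k] @ x for k in range(N)])

assert np.allclose(grad, (A + A.T) @ x)
```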