Is there a faster way to calculate a few diagonal elements of the inverse of a huge symmetric positive definite matrix?
I suggest you look at the book *Matrices, Moments, and Quadrature with Applications*. The basic idea is that you have an $n \times n$ symmetric positive definite matrix $A$, and you wish to compute the quadratic form $u^T f(A)u$, where $u \in \mathbb{R}^n$. In your case you wish to compute $$ A^{-1}_{ii} = e_i^T f(A)e_i, $$ where $e_i \in \mathbb{R}^n$ is the $i$th standard basis vector (all zeros, with a 1 in the $i$th position), and the function $f$ is defined as $f(\lambda) = \lambda^{-1}$.
You can turn the quadratic form into a Stieltjes integral, since $$ u^T f(A)u = u^T Q f(\Lambda) Q^T u = \alpha^T f(\Lambda) \alpha = \sum_{i=1}^n f(\lambda_i) \alpha_i^2 = \int_a^b f(\lambda) \, d\alpha(\lambda), $$ where the eigenvalue decomposition of $A$ is given by $A = Q\Lambda Q^T$, $\Lambda$ is a diagonal matrix containing the eigenvalues of $A$, and the vector $\alpha = Q^T u$. The integral can be estimated using Gaussian quadrature via $$ I[f] = \int_a^b f(\lambda) \, d\alpha(\lambda) = \sum_{i=1}^N \omega_i f(t_i) + R[f], $$ where $\{\omega_i\}_{i=1}^N$ are a set of weights, $\{t_i\}_{i=1}^N$ are a set of nodes at which to evaluate the function $f$, and $R[f]$ is a remainder or error term. The values of the $\omega$'s and $t$'s are unknown and must be solved for. Running an iterative procedure similar to the Lanczos algorithm for computing eigenvalues produces a small tridiagonal (Jacobi) matrix derived from $A$: the nodes are its eigenvalues, and the weights are the squared first components of its normalized eigenvectors. This computation may be done efficiently. For more details see the book, as well as this lecture on the topic by James Lambers.
The underlying mathematics and linear algebra may seem a little scary at first, but I assure you this leads to a fairly simple and efficient algorithm. I wrote Matlab code to calculate the weights and nodes for a class in graduate school. It wasn't very difficult. Take a look at the book. Good luck.
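To make this concrete, here is a minimal Python sketch of the idea (not the book's code): it runs plain Lanczos without reorthogonalization on a dense $A$, which is enough to show the mechanics; the book discusses more careful variants.

```python
import numpy as np

def diag_inv_estimate(A, i, m=30):
    """Estimate (A^{-1})_{ii} = e_i^T f(A) e_i with f(lambda) = 1/lambda,
    using m Lanczos steps on A started from e_i (which has unit norm).
    The Gauss nodes are the eigenvalues of the tridiagonal matrix T_m;
    the weights are the squared first components of its eigenvectors."""
    n = A.shape[0]
    q = np.zeros(n); q[i] = 1.0          # starting vector u = e_i
    q_prev = np.zeros(n)
    beta = 0.0
    alphas, betas = [], []
    for _ in range(m):
        w = A @ q - beta * q_prev        # three-term Lanczos recurrence
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        alphas.append(alpha); betas.append(beta)
        if beta < 1e-12:                 # invariant subspace found; stop early
            break
        q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    nodes, S = np.linalg.eigh(T)         # Gauss nodes t_k
    weights = S[0, :] ** 2               # Gauss weights omega_k
    return float(np.sum(weights / nodes))  # sum_k omega_k f(t_k)
```

Each Lanczos step costs only one matrix-vector product with $A$, so no part of $A^{-1}$ is ever formed, and for well-conditioned SPD matrices a modest $m$ already gives a good estimate.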
I originally suggested that a partial Cholesky decomposition should be possible here, but based on a comment I received, I thought about my answer again and realised that my suggestion was incorrect. Apologies if I led anyone astray.
You will need to use a full Cholesky decomposition, but the subsequent solve for the required entries of the inverse can be reduced in scale to avoid redundant computation.
If your SPD matrix $\mathbf{A}$ is written in block form as
$\mathbf{A}=\begin{bmatrix} \mathbf{A}_{00} & \mathbf{A}_{10}^T \\ \mathbf{A}_{10} & \mathbf{A}_{11} \end{bmatrix}$
with $\mathbf{A}_{00}$ being the $3\times3$ SPD block containing the diagonal entries you want (symmetrically permute $\mathbf{A}$ first if they are not the leading entries), its inverse will be given by
$\mathbf{A}^{-1}=\begin{bmatrix} \mathbf{B}_{00} & \mathbf{B}_{10}^T \\ \mathbf{B}_{10} & \mathbf{B}_{11} \end{bmatrix}$
The corresponding Cholesky factor of $\mathbf{A}$, with $\mathbf{A} = \mathbf{R}\mathbf{R}^T$, is then the lower-triangular matrix
$\mathbf{R}=\begin{bmatrix} \mathbf{R}_{00} & \\ \mathbf{R}_{10} & \mathbf{R}_{11} \end{bmatrix}$
The matrix equation to solve for the inverse block $\mathbf{B}_{00}$ (together with $\mathbf{B}_{10}$) can then be reduced to
$\begin{bmatrix} \mathbf{R}_{00} & \\ \mathbf{R}_{10} & \mathbf{R}_{11} \end{bmatrix} \begin{bmatrix} \mathbf{R}_{00}^T & \mathbf{R}_{10}^T\\ & \mathbf{R}_{11}^T \end{bmatrix} \begin{bmatrix} \mathbf{B}_{00} \\ \mathbf{B}_{10} \end{bmatrix}=\begin{bmatrix}\mathbf{I}_{00} \\ \mathbf{0} \end{bmatrix}$
and solved by a forward substitution
$\begin{bmatrix} \mathbf{R}_{00} & \\ \mathbf{R}_{10} & \mathbf{R}_{11} \end{bmatrix} \begin{bmatrix} \mathbf{X}_{0} \\ \mathbf{X}_{1} \end{bmatrix} =\begin{bmatrix}\mathbf{I}_{00} \\ \mathbf{0} \end{bmatrix}$
followed by a back substitution
$\begin{bmatrix} \mathbf{R}_{00}^T & \mathbf{R}_{10}^T\\ & \mathbf{R}_{11}^T \end{bmatrix} \begin{bmatrix} \mathbf{B}_{00} \\ \mathbf{B}_{10} \end{bmatrix}=\begin{bmatrix}\mathbf{X}_{0} \\ \mathbf{X}_{1} \end{bmatrix}$
The solution you are looking for is the diagonal of $\mathbf{B}_{00}$. The right-hand side of this system is only $n\times 3$, which, combined with the lower-triangular structure of $\mathbf{R}$, should make this about the most efficient way to get the solution you require.
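As a minimal sketch of the whole procedure (assuming SciPy, and that the wanted entries occupy the leading $k$ positions):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def leading_diag_of_inverse(A, k=3):
    """Diagonal of the leading k x k block B_00 of A^{-1} for SPD A.

    One full Cholesky factorization, then a forward and a back
    substitution against an n x k right-hand side [I; 0]."""
    n = A.shape[0]
    R = cholesky(A, lower=True)                  # A = R R^T, R lower triangular
    E = np.zeros((n, k)); E[:k, :k] = np.eye(k)  # right-hand side [I_00; 0]
    X = solve_triangular(R, E, lower=True)       # forward solve: R X = E
    B = solve_triangular(R.T, X, lower=False)    # back solve:    R^T B = X
    return np.diag(B[:k, :k])                    # diagonal entries of B_00
```

The $O(n^3)$ factorization dominates the cost; the two triangular substitutions only touch a $k$-column right-hand side, costing $O(n^2 k)$.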
An alternative could be the (preconditioned) conjugate gradient method for solving $Ax = b$ with $b = e_1 = (1,0,0,\dots)^T$: then $(A^{-1})_{11}$ is the first entry of $x$. Repeat with $b = e_i$ for each diagonal entry you need.
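A minimal sketch of this approach, assuming SciPy's `cg` (a preconditioner can be supplied via its `M` argument):

```python
import numpy as np
from scipy.sparse.linalg import cg

def inv_diag_entry(A, i):
    """Approximate (A^{-1})_{ii} by solving A x = e_i with CG.
    A may be a sparse matrix or a LinearOperator; only matrix-vector
    products with A are required."""
    n = A.shape[0]
    e_i = np.zeros(n); e_i[i] = 1.0
    x, info = cg(A, e_i)           # info == 0 means CG converged
    if info != 0:
        raise RuntimeError(f"CG failed to converge (info={info})")
    return x[i]                    # i-th entry of x = A^{-1} e_i
```

This never factors $A$ at all, so it is attractive when $A$ is sparse or only available as a matrix-vector product, at the price of one iterative solve per desired diagonal entry.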