Can one force the electric quadrupole moments of a neutral charge distribution to vanish using a suitable translation?
Sometimes you can
The obvious example is a purely dipolar charge distribution that has been displaced from the origin, such as a dipolar gaussian $$ \rho(\mathbf r) =p_z(z-z_0)\frac{e^{-(\mathbf{r}-\mathbf{r}_0)^2/2\sigma^2}}{\sigma^5(2\pi)^{3/2}} . $$ This system is neutral, and it has a nonzero dipole moment $p_z=\int z\rho(\mathbf r)\mathrm d\mathbf r$ along the $z$ axis. It also has a nonzero quadrupole moment; for example, $$ Q_{zz} =\int \left(z^2-\frac{r^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r =\frac{4}{3}p_z z_0. $$ However, this is trivially fixable by putting the origin at $\mathbf r_0$, in which case the charge density becomes purely dipolar and all other multipole moments vanish. This is obviously a slightly contrived example, but it shows that there do exist charge densities whose quadrupole moments are nonzero about the original origin but can nevertheless be transformed away by a suitable translation.
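As a quick sanity check, here is a brute-force numerical verification of those moments by grid integration; this is just a sketch with arbitrarily chosen parameter values.

```python
import numpy as np

# Brute-force grid integration of the displaced dipolar gaussian.
# sigma, pz, z0 are arbitrary illustration values.
sigma, pz, z0 = 1.0, 0.7, 0.4
x = np.linspace(-8, 8, 121)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
dV = (x[1] - x[0])**3

u2 = X**2 + Y**2 + (Z - z0)**2          # |r - r_0|^2 with r_0 = z0 * z-hat
rho = pz * (Z - z0) * np.exp(-u2 / (2*sigma**2)) / (sigma**5 * (2*np.pi)**1.5)

q    = rho.sum() * dV                                        # ~ 0 (neutral)
pmom = (Z * rho).sum() * dV                                  # ~ pz
Qzz  = ((Z**2 - (X**2 + Y**2 + Z**2)/3) * rho).sum() * dV    # ~ (4/3) pz z0
Qzz0 = (((Z - z0)**2 - u2/3) * rho).sum() * dV               # ~ 0 about r_0

print(q, pmom, Qzz, 4/3 * pz * z0, Qzz0)
```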
Sometimes you can't
Unfortunately, this is not always the case. As a counterexample, consider the charge density $$ \rho(\mathbf r) =\left( \frac{p_x x}{\sigma^2} + \frac{Q_{xx}(x^2-y^2)}{2\sigma^4} \right) \frac{e^{-\mathbf{r}^2/2\sigma^2}}{\sigma^3(2\pi)^{3/2}}. $$ Here $\rho(\mathbf r)$ is neutral, it has a nonvanishing dipole moment $\mathbf p=\int \mathbf r\rho(\mathbf r)\mathrm d\mathbf r=p_x\hat{\mathbf x}$, and it has nonzero quadrupole moments $$ Q_{xx} =-Q_{yy} =\int \left(x^2-\frac{r^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r $$ along the $x$ and $y$ axes. (Of course, $Q_{zz}$ and all off-diagonal elements are zero.)
Here you have essentially one quadrupole moment to cancel out, and one nonzero dipole moment component with which to do so, so if all you're doing is counting linearly independent components then it looks like you have a good chance. Unfortunately, this doesn't work.
To be more specific, take a passive approach where you keep $\rho(\mathbf r)$ as it is and you calculate the new multipole moments about a different origin, $$ Q'_{ij} =\int \left((x_i-x_i^{(0)})(x_j-x_j^{(0)})-\delta_{ij}\frac{(\mathbf r-\mathbf r_0)^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r. $$ In these conditions the moments transform as follows: \begin{align} Q'_{xx} & = Q_{xx} -\frac{4}{3} p_x x_0, \\ Q'_{yy} & = -Q_{xx} +\frac{2}{3} p_x x_0,\\ Q'_{zz} & =\frac{2}{3} p_x x_0. \end{align} It is therefore possible to choose the new origin's $x$ coordinate $x_0$ such that $Q'_{xx}$ vanishes, but it is patently impossible to make all the components vanish. This completes the proof: some systems are not susceptible to this sort of simplification, and their subleading term in the multipole expansion stays only a factor of $1/r$ behind the leading term, instead of being pushed down to $1/r^2$ behind.
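As a quick check of these relations, here is a short symbolic verification: solve the first one for $x_0$ and see what is left over in the other two (a sketch using sympy; the symbols simply stand in for the moments above).

```python
import sympy as sp

# Solve Q'_xx = 0 for x0 and see what remains of the other two diagonal moments.
px, Qxx = sp.symbols('p_x Q_xx', positive=True)
x0 = sp.symbols('x_0', real=True)

x0_star = sp.solve(sp.Eq(Qxx - sp.Rational(4, 3)*px*x0, 0), x0)[0]  # 3*Qxx/(4*px)
print(sp.simplify(-Qxx + sp.Rational(2, 3)*px*x0_star))   # Q'_yy -> -Qxx/2
print(sp.simplify(sp.Rational(2, 3)*px*x0_star))          # Q'_zz ->  Qxx/2
```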
As to how this looks, here are some plots of our counterexample $\rho(\mathbf r)$ as the parameters go from purely dipolar to purely quadrupolar and back:
None of the displayed distributions (with the exception of the extremes) can be brought to dipole + hexadecapole form via a translation.
And you can tell the difference
OK, so the result is sometimes true and sometimes not, which does very little to help us understand what's going on here. So, let's have a closer look and characterize the cases where it is and isn't possible to transform away the quadrupole components, and that will help us understand why some work and some don't.
The trick here (and here I thank my undergraduate supervisor, Eduardo Nahmad-Achar, for the crucial suggestion) is to boil down the quadrupole moment tensor to its essential components. $Q_{ij}$ is an intimidating quantity, but looks more complicated than it is. In particular, we're defining it as $$ Q_{ij}=\int \left(x_ix_j -\delta_{ij}\frac{r^2}{3}\right) \rho(\mathbf r)\mathrm d \mathbf r, $$ so it is easy to see that it is a second-rank tensor with respect to rotations, and that it is symmetric and traceless (in the sense that $\sum_iQ_{ii}=0$). The symmetry takes the number of independent components down from nine to six, and the tracelessness from there to five, but in fact you can go lower: because the tensor is symmetric, it can always be diagonalized, so the off-diagonal components can be set to zero with a suitably aligned set of axes. The tracelessness then reduces $Q_{ij}$ to only two independent components in this frame.
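As an aside, extracting that eigenframe for a given quadrupole tensor is a one-liner numerically; here is a small sketch (the example tensor is mine, chosen with only $xz$ components nonzero).

```python
import numpy as np

# Principal values and principal axes of a symmetric, traceless quadrupole tensor.
Q = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])           # example: only the xz components are nonzero
evals, axes = np.linalg.eigh(Q)           # eigenvalues (ascending) and eigenvectors
print(evals, np.isclose(evals.sum(), 0))  # two independent values; they sum to zero
```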
So, let's start by setting up a general framework for our transformed quadrupole moments. Using the passive transformation from before, and expanding the square, we get \begin{align} Q'_{ij} & = \int \left( \left(x_i-x_i^{(0)}\right)\left(x_j-x_j^{(0)}\right)-\delta_{ij}\frac{(\mathbf r-\mathbf r_0)^2}{3} \right)\rho(\mathbf r)\mathrm d\mathbf r \\ & = \int \left( x_ix_j-x_ix_j^{(0)}-x_jx_i^{(0)}+x_i^{(0)}x_j^{(0)}-\delta_{ij}\frac{r^2 - 2\mathbf r\cdot\mathbf r_0+r_0^2}{3} \right)\rho(\mathbf r)\mathrm d\mathbf r \\ & = Q_{ij} -x_j^{(0)}p_i-x_i^{(0)}p_j + \frac23 \delta_{ij}\mathbf{r}_0\cdot\mathbf p , \end{align} where the constant terms in $x_i^{(0)}x_j^{(0)}$ and in $\frac13\delta_{ij} r_0^2$ cancel out because the system is neutral. This system splits into two types of equations: three with $i\neq j$, and three with $i=j$, which look relatively different: \begin{align} Q'_{ij} &= Q_{ij}- p_ix_j^{(0)}-p_jx_i^{(0)}\text { for }i\neq j,\text{ and}\\ Q'_{ii} &= Q_{ii}-2\left( p_ix_i^{(0)}-\frac13 \mathbf p\cdot\mathbf r_0 \right). \end{align} Here the goal is to have the primed moments vanish, and without loss of generality we've put ourselves in a frame where the off-diagonal moments, $Q_{ij}$ for $i\neq j$, vanish to begin with. That means, then, that our equations really read \begin{align} p_ix_j^{(0)}+p_jx_i^{(0)} & = 0\text { for }i\neq j,\text{ and}\\ \left( p_ix_i^{(0)}-\frac13 \mathbf p\cdot\mathbf r_0 \right) & = \frac12 Q_{ii}. \end{align} These two sets are rather different, and it's better to tackle them separately.
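In code, the transformation law we just derived is a one-liner, and it is handy to have around for testing specific cases; here is a small numpy sketch (the function name is mine), applied to the 'sometimes you can't' example from above.

```python
import numpy as np

# Quadrupole tensor about a new origin r0, for a neutral (q = 0) distribution:
# Q'_ij = Q_ij - p_i r0_j - p_j r0_i + (2/3) delta_ij (p . r0)
def translate_quadrupole(Q, p, r0):
    Q, p, r0 = (np.asarray(a, dtype=float) for a in (Q, p, r0))
    return Q - np.outer(p, r0) - np.outer(r0, p) + (2/3) * p.dot(r0) * np.eye(3)

# The 'sometimes you can't' example: p along x, Q = diag(Qxx, -Qxx, 0).
Qxx, px = 1.0, 1.0
Qp = translate_quadrupole(np.diag([Qxx, -Qxx, 0.0]), [px, 0, 0], [3*Qxx/(4*px), 0, 0])
print(np.round(Qp, 12))   # diag(0, -Qxx/2, +Qxx/2): the xx component is gone, the rest is not
```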
Let's start, then, with the diagonal set of equations, \begin{align} \left(p_i\hat{\mathbf e}_i -\frac13 \mathbf p\right)\cdot \mathbf r_0 = \frac12 Q_{ii}, \end{align} which looks nice enough. However, there's a problem with this set, in that it has a built-in linear dependence that comes from the innate tracelessness of our original quadrupole moments. This becomes somewhat clearer when expressed in coordinate form as \begin{align} \frac23 p_x x^{(0)} - \frac13 p_y y^{(0)} - \frac13 p_z z^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} + \frac23 p_y y^{(0)} - \frac13 p_z z^{(0)} & = \frac12 Q_{yy} \\ -\frac13 p_x x^{(0)} - \frac13 p_y y^{(0)} + \frac23 p_z z^{(0)} & = \frac12 Q_{zz}, \end{align} in that if you add all three equations you get identically zero; that is, any equation is (the negative of) the sum of the other two. The solution space of each equation in this set is a plane in $\mathbf r_0$ space, and we are guaranteed that if two of the planes intersect then the third plane must also contain that intersection.
In addition, we also know that sometimes these equations can fail to have solutions and that therefore the planes can be parallel. Indeed, we see this in our 'sometimes you can't' example, where the equations reduce to \begin{align} \frac23 p_x x^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} & = - \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} & =0, \end{align} which is not a consistent set of equations, and describes three parallel planes normal to the $x$ axis.
On the other hand, since the system is linearly dependent then we also know that if a solution does exist, then it will have a degeneracy of at least dimension $1$ along the common line of intersection of the planes. Finding the direction of this line tells us a lot about the planes, and we can get it directly as the cross product of any two normal vectors, so, for definiteness, \begin{align} \mathbf n & = \left(p_1\hat{\mathbf e}_1 -\frac13 \mathbf p\right) \times \left(p_2\hat{\mathbf e}_2 -\frac13 \mathbf p\right) \\ & = p_1p_2\hat{\mathbf e}_3 -\frac13p_1\hat{\mathbf e}_1\times(p_2\hat{\mathbf e}_2+p_3\hat{\mathbf e}_3) -\frac13p_2(p_1\hat{\mathbf e}_1+p_3\hat{\mathbf e}_3)\times\hat{\mathbf e}_2 \\ & = p_1p_2\hat{\mathbf e}_3 -\frac13(p_1p_2\hat{\mathbf e}_3-p_1p_3\hat{\mathbf e}_2) -\frac13(p_1p_2\hat{\mathbf e}_3-p_2p_3\hat{\mathbf e}_1) \\ & = \frac13\left[\vphantom{\sum} p_1p_2\hat{\mathbf e}_3+p_2p_3\hat{\mathbf e}_1+p_3p_1\hat{\mathbf e}_2 \right]. \end{align} This is a really cool result, and as a bonus we know that we get the same vector for the pairs $(2,3)$ and $(3,1)$ of planes (and $-\mathbf n$ of the reversed pairs).
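If you don't feel like doing the cross products by hand, here is a short symbolic check of both results (a sketch using sympy).

```python
import sympy as sp

# Check the two plane-normal cross products symbolically.
p1, p2, p3 = sp.symbols('p_1 p_2 p_3', real=True)
p = sp.Matrix([p1, p2, p3])
e1, e2, e3 = sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0]), sp.Matrix([0, 0, 1])

n12 = (p1*e1 - p/3).cross(p2*e2 - p/3)
n23 = (p2*e2 - p/3).cross(p3*e3 - p/3)
print((n12 - sp.Matrix([p2*p3, p3*p1, p1*p2])/3).expand())   # zero vector
print((n12 - n23).expand())                                   # zero vector: same n for both pairs
```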
This then gives us a first criterion: if, in an eigenframe of the original quadrupole moment, any two components of the dipole moment are nonzero, then the planes describing the equations for the vanishing of the diagonal components will intersect in a line. If only one of these components is nonzero, then the planes are parallel, and there will only be solutions if all the planes coincide.
Having said all this, the analysis above is not enough to solve the problem, because we still need to follow up on what happens with the off-diagonal moments, which were originally zero because of the frame we chose but need to remain at zero. Those equations read \begin{align} \phantom{p_i x^{(0)}} + p_z y^{(0)} + p_y z^{(0)} & = 0 \\ p_z x^{(0)} \phantom{+p_i x^{(0)}} + p_x z^{(0)} & = 0 \\ p_y x^{(0)} + p_x y^{(0)} \phantom{+p_i x^{(0)}} & = 0, \end{align} and being a homogeneous system they're somewhat easier to handle: zero is always a solution, and they only have nonzero solutions (which we do want, or our translation by $\mathbf r_0$ doesn't do anything) if their determinant $$ \det\begin{pmatrix} 0 & p_z & p_y \\ p_z & 0 & p_x \\ p_y & p_x & 0 \end{pmatrix} =2p_xp_yp_z $$ vanishes. That puts us in a bit of a conundrum, though, because we wanted at least two nonvanishing dipole components in this frame, which means that (as we really should have expected) the system is riding the edge of solvability.
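For completeness, the determinant is a two-line symbolic check (a sketch).

```python
import sympy as sp

# The coefficient matrix of the off-diagonal (homogeneous) system and its determinant.
px, py, pz = sp.symbols('p_x p_y p_z')
M = sp.Matrix([[0, pz, py],
               [pz, 0, px],
               [py, px, 0]])
print(M.det())   # 2*p_x*p_y*p_z
```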
We have, therefore, two conflicting demands on how many components of the dipole are allowed to vanish in this frame. They can't all be nonzero (or the determinant above is nonzero and the off-diagonal equations only admit $\mathbf r_0=0$) and they can't all be zero (by the initial hypothesis of nonzero $\mathbf p$), so that leaves two distinct cases:
Exactly one component of the dipole vanishes. Choosing that component as $p_z$, our equations read in component form \begin{align} \frac23 p_x x^{(0)} - \frac13 p_y y^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} + \frac23 p_y y^{(0)} & = \frac12 Q_{yy}, \end{align} and \begin{align} \phantom{p_i x^{(0)} + 0 y^{(0)} + } p_y z^{(0)} & = 0 \\ \phantom{ 0 x^{(0)} +p_i x^{(0)} + } p_x z^{(0)} & = 0 \\ p_y x^{(0)} + p_x y^{(0)} \phantom{+p_i x^{(0)}} & = 0; \end{align} and from the latter we get that $z^{(0)}=0$. To get the other two components, we first solve the first set of equations, which are now guaranteed a unique solution, \begin{align} \begin{pmatrix} x^{(0)} \\ y^{(0)} \end{pmatrix} & = \frac32 \begin{pmatrix} 2 p_x & - p_y \\ - p_x & 2 p_y \end{pmatrix}^{-1} \begin{pmatrix} Q_{xx} \\ Q_{yy}\end{pmatrix} \\ & = \frac{3/2}{3p_xp_y} \begin{pmatrix} 2 p_y & p_y \\ p_x & 2 p_x \end{pmatrix} \begin{pmatrix} Q_{xx} \\ Q_{yy}\end{pmatrix} \\ & = \frac{1}{2p_xp_y} \begin{pmatrix} p_y(2Q_{xx} + Q_{yy}) \\ p_x(Q_{xx} + 2 Q_{yy}) \end{pmatrix} , \end{align} and then comes the final test - whether this solution satisfies the final equation, \begin{align} 0 &= p_y x^{(0)}+ p_x y^{(0)} = \frac{1}{2p_xp_y} \begin{pmatrix} p_y & p_x\end{pmatrix} \begin{pmatrix} p_y(2Q_{xx} + Q_{yy}) \\ p_x(Q_{xx} + 2 Q_{yy}) \end{pmatrix} \\ & = \frac{1}{2p_xp_y} \bigg[ p_y^2(2Q_{xx} + Q_{yy}) + p_x^2(Q_{xx} + 2 Q_{yy}) \bigg]. \end{align} This is a single scalar compatibility condition, $$ p_y^2(2Q_{xx} + Q_{yy}) + p_x^2(Q_{xx} + 2 Q_{yy}) = 0, $$ relating the two nonzero dipole components to the quadrupole eigenvalues. It does not hold for a generic distribution, but it can certainly hold with nonzero quadrupole moments: for instance, $p_x=p_y$ together with $Q_{yy}=-Q_{xx}$ satisfies it.
That is, this option does work, but only for a fine-tuned family of distributions: the dipole components and the quadrupole eigenvalues must conspire to satisfy the compatibility condition above. The displaced dipole from the 'sometimes you can' example, pushed off its own axis, is exactly of this type: in an eigenframe of its quadrupole moment the dipole has two equal-magnitude nonzero components, the condition is satisfied automatically, and the translation that solves the equations is just the one that puts the origin back on the dipole.
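Here is a concrete numerical instance of this case, using the transformation law from above (a sketch; the numbers are chosen to satisfy the compatibility condition).

```python
import numpy as np

# p = (1, 1, 0) and Q = diag(1, -1, 0) satisfy py^2 (2 Qxx + Qyy) + px^2 (Qxx + 2 Qyy) = 0,
# and the shift r0 = (1/2, -1/2, 0) removes the quadrupole completely.
Q  = np.diag([1.0, -1.0, 0.0])
p  = np.array([1.0, 1.0, 0.0])
r0 = np.array([0.5, -0.5, 0.0])
Qp = Q - np.outer(p, r0) - np.outer(r0, p) + (2/3) * p.dot(r0) * np.eye(3)
print(np.allclose(Qp, 0))   # True
```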
The other option, then, is that exactly two components of the dipole vanish. Putting our only remaining component along $z$, our equations read \begin{align} - \frac13 p_z z^{(0)} & = \frac12 Q_{xx} \\ - \frac13 p_z z^{(0)} & = \frac12 Q_{yy} \\ + \frac23 p_z z^{(0)} & = \frac12 Q_{zz}, \end{align} and \begin{align} \phantom{p_i x^{(0)}} + p_z y^{(0)} \phantom{+ p_y z^{(0)}}& = 0 \\ p_z x^{(0)} \phantom{ + p_i x^{(0)} + p_x z^{(0)} } & = 0. \end{align} Here the second set requires that we remain on the symmetry axis of the dipole, i.e. $x^{(0)}=y^{(0)}=0$, and the first set gives us a nonzero displacement $z^{(0)}$ along that axis - together with a strong (but not fully stifling) constraint on the quadrupole moments, namely, that $$ Q_{xx} = Q_{yy} = -\frac12 Q_{zz}. $$ In other words, the quadrupole moment needs to be zonal (as opposed to tesseral or sectoral) and it needs to have full axial symmetry.
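And the analogous check for this axially symmetric case (again a sketch, with arbitrary values of $Q_{zz}$ and $p_z$):

```python
import numpy as np

# Cylindrically symmetric quadrupole with the dipole along the symmetry axis:
# the axial shift z0 = 3 Qzz / (4 pz) removes the quadrupole.
Qzz, pz = 2.0, 0.5
Q  = np.diag([-Qzz/2, -Qzz/2, Qzz])
p  = np.array([0.0, 0.0, pz])
r0 = np.array([0.0, 0.0, 3*Qzz/(4*pz)])
Qp = Q - np.outer(p, r0) - np.outer(r0, p) + (2/3) * p.dot(r0) * np.eye(3)
print(np.allclose(Qp, 0))   # True
```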
This means that our initial 'sometimes you can' example was, in essence, completely representative: the quadrupole moment can be transformed away precisely when it is the quadrupole moment that the distribution's own dipole would produce if it sat at some displaced position, in which case moving the origin to that position removes it. To phrase the theorem in full, then, we have:
Summary
Given a neutral charge distribution with nonzero dipole and quadrupole moments, it is possible to choose an origin about which all quadrupole moments $Q'_{ij}$ vanish if and only if the original quadrupole moment can be written as $Q_{ij} = p_i d_j + p_j d_i - \frac23 \delta_{ij}\, \mathbf p\cdot\mathbf d$ for some displacement $\mathbf d$, i.e. if and only if it is exactly the quadrupole moment that the same dipole would acquire by sitting at $\mathbf d$ instead of at the origin; that $\mathbf d$ is then the required new origin. In an eigenframe of $Q_{ij}$ this happens in exactly two ways: either the dipole lies along a principal axis and the quadrupole is cylindrically symmetric about that axis (two equal eigenvalues), or the dipole lies in a principal plane, with components $p_1,p_2$ along two principal axes whose eigenvalues $Q_{11},Q_{22}$ satisfy $p_2^2(2Q_{11}+Q_{22})+p_1^2(Q_{11}+2Q_{22})=0$.
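If you want to test the criterion for a given pair $(\mathbf p, Q_{ij})$ without choosing any frames, you can simply ask whether the six linear equations $Q_{ij}=p_id_j+p_jd_i-\frac23\delta_{ij}\,\mathbf p\cdot\mathbf d$ admit an exact solution $\mathbf d$; here is a least-squares sketch of that test (the function name and tolerance are mine).

```python
import numpy as np

def quadrupole_killing_shift(Q, p, tol=1e-10):
    """Return the translation d that zeroes the quadrupole, or None if none exists."""
    Q, p = np.asarray(Q, dtype=float), np.asarray(p, dtype=float)
    A, b = [], []
    for i in range(3):
        for j in range(i, 3):
            row = np.zeros(3)
            row[j] += p[i]                  # coefficient of d_j
            row[i] += p[j]                  # coefficient of d_i
            if i == j:
                row -= (2/3) * p            # the -(2/3) delta_ij (p . d) term
            A.append(row)
            b.append(Q[i, j])
    A, b = np.array(A), np.array(b)
    d = np.linalg.lstsq(A, b, rcond=None)[0]
    return d if np.allclose(A @ d, b, atol=tol) else None

print(quadrupole_killing_shift(np.diag([1.0, -1.0, 0.0]), [1, 1, 0]))  # [ 0.5 -0.5  0. ]
print(quadrupole_killing_shift(np.diag([1.0, -1.0, 0.0]), [1, 0, 0]))  # None
```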
This is really restrictive (particularly compared to the absolutely-no-qualms case one order above), but hey, that's life.
This answer supersedes two previous, incorrect versions, which are accessible via the revision history.
I think it's easier to see in Cartesian coordinates. Define the "primitive" moments
\begin{align} q & = \int \rho(\mathbf r)\, \text{d}^3\mathbf{r}\\ p_i & = \int r_i \rho(\mathbf r)\, \text{d}^3 \mathbf{r}\\ Q_{ij} & = \int r_i r_j \rho(\mathbf r)\, \text{d}^3 \mathbf{r} \end{align}
Assume $q = 0$, $p_i$ not all 0.
If you displace the origin by $\mathbf{d}$ and call the new quadrupole moments $Q'_{ij}$, then
\begin{align} Q'_{ij} & = \int (r_i - d_i)(r_j - d_j) \rho \text{d}^3 \mathbf{r}\\ &= Q_{ij} - p_i d_j - p_j d_i + q d_i d_j. \end{align}
If $q = 0$, finding a translation that makes $Q'_{ij} = 0$ is now an (overspecified) linear problem of six equations in three unknowns. To make it even simpler, assume the dipole is aligned along the z-axis ($p_1 = p_2 = 0$). Then it is easy to see that no translation will change $Q_{11}, Q_{12}, \text{ or } Q_{22}$, so if any of them are nonzero then translating the distribution will not make them zero. If they are all zero, then the quadrupole moment can be entirely zeroed out by setting $d_1 = Q_{13}/p_3$, $d_2 = Q_{23}/p_3$, and $d_3 = Q_{33}/(2p_3)$.
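A quick numerical check of this recipe (a sketch; the numbers are arbitrary):

```python
import numpy as np

# Primitive moments with the dipole along z and Q11 = Q12 = Q22 = 0.
p = np.array([0.0, 0.0, 1.3])
Q = np.zeros((3, 3))
Q[0, 2] = Q[2, 0] = 0.7                       # Q13
Q[1, 2] = Q[2, 1] = -0.2                      # Q23
Q[2, 2] = 0.9                                 # Q33

d = np.array([Q[0, 2], Q[1, 2], Q[2, 2] / 2]) / p[2]
Qp = Q - np.outer(p, d) - np.outer(d, p)      # q = 0, so the d_i d_j term drops
print(np.allclose(Qp, 0))                     # True
```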