Is light actually faster than what our present measurements tell us?
If we take air, the refractive index at one atmosphere is around $1.0003$, so if we measure the speed of light in air we get a speed that is too slow by a factor of about $1.0003$, i.e. a fractional error $\Delta c/c$ of about $3 \times 10^{-4}$.
The difference of the refractive index from one, $n-1$, is proportional to the pressure. If we write the pressure $P$ as a fraction of one atmosphere (i.e. the pressure divided by one atmosphere), then the fractional error in our measurement of $c$ is about:
$$ \frac{\Delta c}{c} = 3 \times 10^{-4} \, P $$
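As a quick sanity check, here is a minimal Python sketch of this scaling. The helper name `fractional_error` is my own, and the coefficient $3\times10^{-4}$ per atmosphere is just the value quoted above (the exact refractivity of air varies a little with wavelength, temperature, and humidity):

```python
def fractional_error(pressure_atm, refractivity_per_atm=3e-4):
    """Return Delta c / c for a speed-of-light measurement made in
    residual air, using the approximation n - 1 ≈ 3e-4 * P with P
    expressed in atmospheres."""
    return refractivity_per_atm * pressure_atm

# At one atmosphere we recover the full refractive-index error:
print(fractional_error(1.0))  # 3e-4
```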
In high-vacuum labs we can, without too much effort, get to $10^{-10}$ torr, which is around $10^{-13}$ atmospheres or roughly 10 nPa. So measuring the speed of light in this vacuum would give us an error:
$$ \frac{\Delta c}{c} \approx 3 \times 10^{-17} $$
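Plugging the $10^{-10}$ torr figure into the helper sketched above (one atmosphere is 760 torr by definition, so this is about $1.3\times10^{-13}$ atm) reproduces the estimate:

```python
TORR_PER_ATM = 760.0  # definition of the torr

pressure_atm = 1e-10 / TORR_PER_ATM    # ~1.3e-13 atm
print(fractional_error(pressure_atm))  # ~4e-17, the order of 3e-17 quoted above
```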
And this is already far smaller than the experimental uncertainties in the measurement itself.
So while it is technically correct that we've never measured the speed of light in a perfect vacuum, the vacuum we can generate is sufficiently good that its effect on the measurement is entirely negligible.
The answer by John Rennie covers the impact of the imperfect vacuum well, so I won't repeat that here.
As regards the last part of your question, about whether this should be accounted for when measuring distance, it's worth noting that the standard defines the speed of light to be a specific value ($299\,792\,458$ m/s exactly) and then, together with the definition of the second, the meter is derived as a matter of measurement. So as the standards are currently written, the speed of light is exact by definition.
Your question, as written, implicitly assumes that the meter and the second are given by definition and that the speed of light is a matter of measurement.
So from that perspective, your question really asks whether imperfect vacuum affects our realization of the meter. The answer is that it probably does, as John Rennie's answer approximately quantifies. Whether or not it matters depends on what method is used to realize the meter and what other experimental uncertainties are inherent to that method.
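To make that concrete, here is a hypothetical sketch of how a time-of-flight length measurement would be biased by residual gas: light covers a true length $L$ in time $t = nL/c$, so naively inferring $L = ct$ overestimates the length by a factor $n$. The function name and numbers below are illustrative, reusing the $n - 1 \approx 3\times10^{-4}P$ estimate from the other answer:

```python
C = 299_792_458.0  # speed of light, exact by definition (m/s)

def inferred_length(true_length_m, pressure_atm):
    """Length inferred from a light time-of-flight if the residual gas
    is ignored. Light actually travels at c/n, so the naive inference
    L = c * t comes out too long by a factor n."""
    n = 1.0 + 3e-4 * pressure_atm           # refractive index of residual air
    time_of_flight = n * true_length_m / C  # actual travel time
    return C * time_of_flight               # naive inference

L = 1.0  # one true meter
bias = inferred_length(L, pressure_atm=1.3e-13) - L
print(bias)  # ~4e-17 m: tens of attometers, far below other uncertainties
```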
There is a constant in physics called $c$ that is the "exchange rate" between space and time: one second in time is in some sense "equivalent" to $c$ times one second, which is a distance in space. Light is taken to travel at $c$. Note that $c$ isn't the speed of light, but rather the speed of light is $c$, which is a subtle distinction: $c$ being what it is causes light to travel at that speed, rather than light traveling at that speed causing $c$ to have that value.

$c$ has been measured by looking at how fast light travels, but there are also several other ways of finding it. For instance, $c^2$ is equal to the reciprocal of the product of the vacuum permittivity and the vacuum permeability, $c^2 = 1/(\varepsilon_0 \mu_0)$. So not only is the effect of imperfect vacuum negligible when measuring $c$ via the speed of light, but there are multiple other observables that depend on it.
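As a numerical illustration of that last relation, $c = 1/\sqrt{\varepsilon_0 \mu_0}$, here is a short check using the CODATA 2018 values of the two constants (an illustrative sketch, not a measurement; since the 2019 SI redefinition both constants are themselves determined to be consistent with the exact $c$):

```python
import math

# CODATA 2018 values (SI units)
EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
MU_0 = 1.25663706212e-6       # vacuum permeability, H/m

c = 1.0 / math.sqrt(EPSILON_0 * MU_0)
print(c)  # ~299792458 m/s
```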