What is the fastest/most efficient algorithm for estimating Euler's Constant $\gamma$?
The paper "On the computation of the Euler constant $\gamma$" by Ekatharine A. Karatsuba, in Numerical Algorithms 24(2000) 83-97, has a lot to say about this. This link might work for you.
In particular, the author shows that for $k\ge 1$, $$ \gamma= 1-\log k \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1}}{(r-1)!(r+1)} + \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1} }{(r-1)! (r+1)^2}+\mbox{O}(2^{-k})$$
and more explicitly $$\begin{align*} -\frac{2}{(12k)!} - 2k^2 e^{-k} \le \gamma -1+&\log k \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1}}{(r-1)!(r+1)} - \sum_{r=1}^{12k+1} \frac{ (-1)^{r-1} k^{r+1} }{(r-1)! (r+1)^2}\\ &\le \frac{2}{(12k)!} + 2k^2 e^{-k}\end{align*}$$ for $k\ge 1$.
Since the error term decays rapidly with $k$, these formulas give good approximations to $\gamma$ fairly quickly.
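To see the rate of convergence in practice, here is a small Python sketch of the truncated formula displayed above (the function name, the use of exact `Fraction` arithmetic for the two alternating sums, and the hard-coded reference value are my own choices, not from the paper; the alternating terms grow large and cancel badly in plain floating point, which is why the sums are done exactly):

```python
from fractions import Fraction
from math import log

def gamma_karatsuba(k):
    """Truncated approximation of Euler's constant from the formula above."""
    s1 = Fraction(0)   # sum with (r + 1) in the denominator
    s2 = Fraction(0)   # sum with (r + 1)^2 in the denominator
    term = Fraction(k * k)          # k^(r+1) / (r-1)!  at r = 1
    for r in range(1, 12 * k + 2):  # r = 1, ..., 12k + 1
        if r > 1:
            term *= Fraction(k, r - 1)          # advance k^(r+1) / (r-1)!
        signed = term if r % 2 == 1 else -term  # factor (-1)^(r-1)
        s1 += signed / (r + 1)
        s2 += signed / (r + 1) ** 2
    # final combination in double precision; math.log caps accuracy near 1e-16
    return 1 - log(k) * float(s1) + float(s2)

gamma = 0.5772156649015329  # reference value of Euler's constant (double precision)
for k in (5, 10, 20, 40):
    approx = gamma_karatsuba(k)
    print(k, approx, approx - gamma)
```

The printed error should shrink roughly in line with the $\mathrm{O}(2^{-k})$ bound quoted above.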
I like $$ \gamma = \lim_{n \rightarrow \infty} \; \; \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \frac{1}{n^2 + 1 } - \cdots - \frac{1}{n^2 + n} \; \; \right) $$ because it needs no logarithm and the error is comparable to the final term used.
n      sum                   error                   n^2 * error
1      0.5                   0.07721566490153287     0.07721566490153287
10     0.5757019096925315    0.001513755209001322    0.1513755209001322
100    0.5771991634147917    1.650148674114948e-05   0.1650148674114948
1000   0.5772154984013406    1.665001923001341e-07   0.1665001923001341
10000  0.5772156632363485    1.665184323762503e-09   0.1665184323762503
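For what it is worth, here is a short Python sketch (the helper name is mine) that evaluates the bracketed partial sum directly and reproduces the first rows of the table; the last row, $n = 10000$, needs about $10^8$ terms and is skipped just to keep the run short:

```python
from math import fsum

def macys_sum(n):
    """1 + 1/2 + ... + 1/n  -  1/(n+1) - ... - 1/(n^2 + n)."""
    head = fsum(1.0 / j for j in range(1, n + 1))
    tail = fsum(1.0 / j for j in range(n + 1, n * n + n + 1))
    return head - tail

gamma = 0.5772156649015329  # Euler's constant to double precision
for n in (1, 10, 100, 1000):
    s = macys_sum(n)
    print(n, s, gamma - s, n * n * (gamma - s))
```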
I found this formula on page 82 of the January 2012 issue (volume 119, number 1) of the M.A.A.'s American Mathematical Monthly. It was sent in by someone named Juozas Juvencijus Macys, possibly for the Problems and Solutions section. He stopped the sum at $-1/n^2.$ I noticed that the error would be minimized by continuing the sum to $-1/(n^2 + n).$ If you want, you can add a single term $1/(6 n^2)$ to get the error down to $n^{-3}.$
$$ \gamma = \lim_{n \rightarrow \infty} \; \; \frac{1}{6n^2} + \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \frac{1}{n^2 + 1 } - \cdots - \frac{1}{n^2 + n} \; \; \right) $$
n      sum                   error
1      0.6666666666666666    -0.08945100176513376
10     0.5773685763591982    -0.0001529114576653834
100    0.5772158300814584    -1.651799255153463e-07
1000   0.5772156650680073    -1.664743898288634e-10
10000  0.5772156649030152    -1.482369782479509e-12
EDIT, December 2013. I just got a nice note, with an English preprint, from Prof. Macys. The original article appeared in Lithuanian in 2008; a Russian version and a matching English translation both appeared in 2013 in the journal Mathematical Notes, volume 94, number 5, pages 45-50 (the Springer website is not quite up to that issue yet). The title is "On the Euler-Mascheroni constant."
If desired, you can add two correction terms to get the error down to $n^{-4}.$
$$ \gamma = \lim_{n \rightarrow \infty} \; \; \frac{-1}{6n^3} +\frac{1}{6n^2} + \left( \; \; 1 + \frac{1}{2} + \cdots + \frac{1}{n} - \frac{1}{n + 1 } - \cdots - \frac{1}{n^2 } - \cdots - \frac{1}{n^2 + n} \; \; \right) $$
n      sum                   error
10     0.5772019096925316    1.375520900126492e-05
100    0.5772156634147917    1.486741174616668e-09
600    0.5772156649003506    1.182276498923329e-12
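Both corrected versions are easy to check with the same partial sum; here is a small sketch (same caveats as the one above), whose two printed error columns correspond to the one-term and two-term corrections:

```python
from math import fsum

def macys_sum(n):
    """Same partial sum as in the previous sketch."""
    head = fsum(1.0 / j for j in range(1, n + 1))
    tail = fsum(1.0 / j for j in range(n + 1, n * n + n + 1))
    return head - tail

gamma = 0.5772156649015329  # Euler's constant to double precision
for n in (10, 100, 600):
    base = macys_sum(n)
    one = base + 1 / (6 * n ** 2)   # single correction term, error ~ n^(-3)
    two = one - 1 / (6 * n ** 3)    # both correction terms, error ~ n^(-4)
    print(n, gamma - one, gamma - two)
```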
A good place for fast evaluation of constants is Gourdon and Sebah's 'Numbers, constants and computation'.
They computed $108\cdot 10^6$ digits of $\gamma$ in 1999 (see the end of their 2004 article 'The Euler constant') and offer a free program, 'PiFast', for high-precision evaluation of various constants.
On his page of constants, Simon Plouffe has Euler's constant to $10^6$ digits (the file looks much smaller, sorry...), computed with Brent's splitting algorithm (see the 1980 paper of Brent and McMillan, 'Some new algorithms for high-precision computation of Euler's constant', or, more recently, section 3.1 of Haible and Papanikolaou's 'Fast multiprecision evaluation of series of rational numbers').
It seems that the 1999 record was broken in 2009 by A. Yee & R. Chan with 29,844,489,545 digits ('Mathematical Constants - Billions of Digits'). Warning: the torrent file proposed there is more than 11 GB! An earlier 52 MB file of 'only' 116 million digits, computed with the method proposed by Gourdon and Sebah, is available here.