How much bounty is out on Stack Overflow?

JavaScript - 176 133 130 108 106

function f()(t+=$("[title~=an]").text(),u=$("[rel*=x]")[0])?$("html").load(u.href,f):alert(eval(t));f(t=0)

Edit 1: trimmed some selectors down and used the ?: suggestion from Google's Closure Compiler (via @Sirko - thanks)

Edit 2: initialise s inside d and initialise t as 0 instead of ""

Edit 3: realised I don't actually need to target a specific container and can sweep the whole document, which gets rid of a bunch of .find calls and an unnecessary selector (plus the variable holding it)

Edit 4: shove the t initialiser in the function call to avoid a ; (it'll get hoisted to the top anyway) and squash the function down to one statement (combine two statements into one inside the ternary statement condition) to drop the {}

Note: I'm not sure if it's cheating, but this has to be run from a console window of a browser already pointing at http://stackoverflow.com/questions?page=1&sort=featured. It relies on the fact that jQuery and the appropriate paging links are available on the page itself. Also, it only appears to work in Firefox and not in IE or Chrome.

Output (at time of posting):

38150 (in an alert dialog)

Exploded/commented:

function f()
    //concat all the bounty labels to t (they take the format "+50")
    //happens to be elements with title attribute containing word 'an'
    (t+=$("[title~=an]").text(),
    //find the "next" (has rel=next attribute) button
    u = $("[rel*=x]")[0])       
        ?
        //if there is a next button, load it, and then recurse f again
        $("html").load(u.href,f)
        :
        //else eval the 0+a+b+...+z tally and alert the result
        alert(eval(t))
//kick off the initial scrape (and simultaneously init the total tally)
f(t=0)

Python - 232, 231, 195, 183, 176, 174

Parses the HTML from https://stackoverflow.com/questions?sort=featured using regular expressions.

The upper bound of the range in the for loop must be the number of pages + 1, or else requests past the last page come back as 404s. The default number of results per page is 15, which is what the code uses (omitting ?pagesize=50 saves characters and is just as effective).
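The regex-and-sum core of the answer can be exercised offline on a made-up HTML fragment (the helper name and sample markup below are mine, not from the answer; the regex is the answer's own):

```python
import re

# The same pattern the golfed answer uses: a ">+" followed by digits,
# bracketed by tags on the same line.
BOUNTY = re.compile(r"<.*>\+(\d+)<.*>")

def page_bounty(html):
    """Sum the bounty amounts found on one page of HTML."""
    return sum(map(int, BOUNTY.findall(html)))

# Hypothetical fragment mimicking the featured-questions markup:
sample = ('<div class="bounty-indicator">+50</div>\n'
          '<div class="bounty-indicator">+100</div>')
print(page_bounty(sample))  # -> 150
```

Because `.` does not match newlines, the pattern effectively matches once per line, capturing the digits after the `+` in each bounty label.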

Thanks to @Gabe for the tip on reducing char count even further.

Golfed:

import requests,re;print sum(sum(map(int,re.findall(r"<.*>\+(\d+)<.*>",requests.get("https://stackoverflow.com/questions?sort=featured&page=%u"%i).text)))for i in range(1,33))

Output (at time of posting):

37700

Un-golfed:

Here's a somewhat un-golfed version that should be a bit easier to read and understand.

import requests, re

print sum(
          sum(
              map( int,
                   re.findall( r"<.*>\+(\d+)<.*>",
                               requests.get( "https://stackoverflow.com/questions?sort=featured&page=%u" % i).text
                   )
              )
          ) for i in range( 1, 33 )
      )

Rebol - 164 133 130 (139 with 404 check)

Parses the HTML using the PARSE sub-language of Rebol. Checks the first 98 pages. I realised I have the same constraint as the Python solution: too many repeats hit 404 errors and stop the execution. Thanks to @rgchris for many improvements! Updated to check up to 98 pages.

s: 0 repeat n 99[parse read join http://stackoverflow.com/questions?sort=featured&page= n[15[thru{>+}copy x to{<}(s: s + do x)]]]s

With error checking for 404s (139):

s: 0 repeat n 99[attempt[parse read join http://stackoverflow.com/questions?sort=featured&page= n[15[thru{>+}copy x to{<}(s: s + do x)]]]]s

Test

>> s: 0 repeat n 20[parse read join http://stackoverflow.com/questions?sort=featured&page= n[15[thru{>+}copy x to{<}(s: s + do x)]]]s
== 23600

>> s: 0 repeat n 99[attempt[parse read join http://stackoverflow.com/questions?sort=featured&page= n[15[thru{>+}copy x to{<}(s: s + do x)]]]]s
Script: none Version: none Date: none
== 36050

Explanation

Rebol ignores extra whitespace, so you can put it all on one line like that if you choose. PARSE takes two inputs; the first argument (the read join ... expression) is fairly self-explanatory. Here are some comments on the parse dialect instructions, in a more traditional indentation:

s: 0
repeat n 99 [
    parse read join http://stackoverflow.com/questions?sort=featured&page= n [
        ;-- match the enclosed pattern 15 times (the rule fails politely when there are fewer entries)
        15 [
            ;-- seek the match position up THRU (and including) the string >+
            thru {>+}
            ;-- copy contents at the current position up TO (but not including) <
            copy x to {<}
            ;-- run plain (non-dialected) Rebol when this match point is reached;
            ;-- DO is a bit dangerous, as it evaluates the string as code
            (s: s + do x)
        ]
    ]
]
;-- evaluator returns last value, we want the value in S
;-- (not the result of PARSE, that's a boolean on whether the end of input was reached)
s
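For readers unfamiliar with Rebol, the THRU/COPY TO scan above can be approximated in Python with plain string searches (the function name and sample input are mine; int() stands in for the riskier DO):

```python
def sum_bounties(html, per_page=15):
    """Mimic the Rebol rule: up to per_page times, seek THRU ">+",
    then COPY the digits up TO the next "<", and tally them."""
    total, pos = 0, 0
    for _ in range(per_page):
        start = html.find(">+", pos)
        if start == -1:
            break                      # rule fails politely, like Rebol's
        start += 2                     # position just past ">+"
        end = html.find("<", start)
        if end == -1:
            break                      # malformed tail; stop scanning
        total += int(html[start:end])  # Rebol used DO; int() is safer
        pos = end
    return total

print(sum_bounties('<div>+50</div> <div>+100</div>'))  # -> 150
```

The key difference from the regex approach is that this is a single forward scan: each THRU/COPY pair advances the match position, so the 15-times repetition walks the page left to right exactly once.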