Why is binary read right to left?
Just as a counterpoint, there is a nice left-to-right method for reading binary numbers: start at the left, and then each time you move rightward, you double your previous total and add the current digit.
Example: $110010_2$:
$1$
$2\cdot 1+1=3$
$2\cdot 3+0=6$
$2\cdot 6+0=12$
$2\cdot 12+1=25$
$2\cdot 25+0=50$.
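The steps above can be sketched in a few lines of Python (the function name is my own):

```python
def from_binary_left_to_right(digits):
    """Read binary digits left to right: double the running total, add the digit."""
    total = 0
    for d in digits:
        total = 2 * total + d  # e.g. 1, 3, 6, 12, 25, 50 for 110010
    return total

print(from_binary_left_to_right([1, 1, 0, 0, 1, 0]))  # → 50
```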
I have found (as have my students) that, with practice, this method is quicker than the right-to-left method.
Edit based on a request for further explanation:
This method works in any base (it is the same idea as Horner's method for evaluating a polynomial). For example, in base ten, if I read you the digits of a number from left to right, say 3, 7, 9, 2, you could process them digit by digit, keeping a provisional total at each step: 3, 37, 379, 3792. At each step you multiply the previous total by ten (the base) and add the next digit.
In the example in my post (multiplying by two at each step), we get $$((((1\cdot 2+1)\cdot 2+0)\cdot 2 + 0)\cdot 2+1)\cdot 2+0=1\cdot 2^5 + 1\cdot 2^4 + 0 \cdot 2^3 + 0\cdot 2^2 + 1\cdot 2+ 0$$ which is just the base-two expanded form of the numeral.
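To illustrate that the nested (Horner) form and the expanded base-two form agree, here is a small Python check on the example from the post:

```python
digits = [1, 1, 0, 0, 1, 0]  # the numeral 110010 in base two

# Horner form: ((((1*2+1)*2+0)*2+0)*2+1)*2+0
horner = 0
for d in digits:
    horner = 2 * horner + d

# Expanded form: 1*2^5 + 1*2^4 + 0*2^3 + 0*2^2 + 1*2 + 0
expanded = sum(d * 2 ** (len(digits) - 1 - i) for i, d in enumerate(digits))

print(horner, expanded)  # both equal 50
```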
The proper question is not, "Why is binary read right to left?" The question that should be asked is, "How do people usually read binary numbers?"
The answer at http://wiki.answers.com/Q/Why_do_you_read_binary_digits_right_to_left suffers because of the way the question was phrased. (The exact question there was, "Why do you read binary digits right to left?") The correct answer (which I think is what the wiki answer was trying to say) is that binary numbers are used in the same way as decimal numbers, except that (a) each digit position is valued only $2$ times the position to its right, not $10$ times, and (b) the only digits allowed are $0$ and $1$.
In other words, we normally read binary numbers left to right, just as we do with decimal numbers, not right to left.
On the other hand, commonly taught algorithms for adding or multiplying decimal numbers by hand are performed starting at the rightmost digit of each number. You can adapt those same algorithms to addition or multiplication of binary numbers.
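As a sketch of that adaptation (helper name and digit-list convention are my own), here is grade-school column addition carried out in base two, starting from the rightmost digit:

```python
def add_binary(a, b):
    """Add two binary numbers given as digit lists (most significant first),
    working from the rightmost digit with a carry, as in hand addition."""
    a, b = a[::-1], b[::-1]  # process the least significant digits first
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry + (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        result.append(s % 2)  # digit to write down
        carry = s // 2        # digit to carry
    if carry:
        result.append(carry)
    return result[::-1]

print(add_binary([1, 0, 1, 1], [1, 1, 0]))  # 1011 + 110 = 10001 (11 + 6 = 17)
```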
There is a related question, which is, "In what order does a computer store the binary digits of a binary number?" The answer to that question depends on which computer is storing the number.
A number of useful functions on the integers (or tuples of integers) have this property: to compute the last $N$ digits of the result you only need the last $f(N)$ digits of the input, where $f$ is some reasonably slow-growing function of $N$. This means it makes sense to evaluate such a function starting from the least significant digit and working towards more significant ones. Depending on how you organise memory, this might make it more convenient to think of your data as stored in the reverse of the order in which decimal numbers are written.
For example, addition and multiplication have this property. (Consequently, evaluating polynomials over the integers has it too.) Division by a fixed power of two also has this property.
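For multiplication the claim can be checked directly: the last $N$ binary digits of a product depend only on the last $N$ binary digits of each factor (here $f(N)=N$), since working modulo $2^N$ commutes with multiplication. A quick Python check:

```python
N = 8
mask = (1 << N) - 1  # keeps only the last N binary digits

a, b = 0b110101101011, 0b101000111101

# Last N digits of the full product vs. the product of the truncated inputs
full = (a * b) & mask
truncated = ((a & mask) * (b & mask)) & mask
print(full == truncated)  # True
```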
(To set things in a larger mathematical context, many computationally useful functions on the integers are continuous in the 2-adic sense. If you store the binary digits of a number starting with the least significant first, and then read through the digits of an integer starting at the least significant, but stop before you get to the end, the digits you've read so far still give a good approximation to the number in the 2-adic sense. This makes it natural to start with the least significant digits.)
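A small numerical illustration of that last point (the helper function is my own): under the 2-adic absolute value $|x|_2 = 2^{-v}$, where $2^v$ is the largest power of two dividing $x$, keeping more least-significant digits gives a better and better approximation.

```python
def two_adic_abs(x):
    """2-adic absolute value: 2^(-v) where 2^v exactly divides x; |0|_2 = 0."""
    if x == 0:
        return 0.0
    v = 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return 2.0 ** -v

n = 0b110010  # 50
for k in range(1, 7):
    approx = n & ((1 << k) - 1)  # keep only the k least significant digits
    # The 2-adic distance to n shrinks (or stays the same) as k grows
    print(k, two_adic_abs(n - approx))
```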