Would you use num%2 or num&1 to check if a number is even?
I code for readability first, so my choice here is num % 2 == 0. This is far clearer than (num & 1) == 0 (note the parentheses: == binds tighter than &, so writing num & 1 == 0 would actually parse as num & (1 == 0)). I'll let the compiler worry about optimizing for me, and only adjust if profiling shows this to be a bottleneck. Anything else is premature.
I consider the second option to be far more elegant and meaningful.
I strongly disagree with this. A number is even because its congruency modulo two is zero, not because its binary representation ends with a certain bit. Binary representations are an implementation detail. Relying on implementation details is generally a code smell. As others have pointed out, testing the LSB fails on machines that use ones' complement representations.
Another point is that the second one might be a little harder to comprehend for less experienced programmers. To that I'd answer that it will probably benefit everybody if those programmers take the short time needed to understand statements of this kind.
I disagree. We should all be coding to make our intent clearer. If we are testing for evenness the code should express that (and a comment should be unnecessary). Again, testing congruency modulo two more clearly expresses the intent of the code than checking the LSB.
And, more importantly, the details should be hidden away in an isEven method. So we should see if (isEven(someNumber)) { // details } and only see num % 2 == 0 once, in the definition of isEven.
If you're going to say that some compilers won't optimise %2, then you should also note that some compilers use a ones' complement representation for signed integers. In that representation, &1 gives the wrong answer for negative numbers.
So what do you want - code which is slow on "some compilers", or code which is wrong on "some compilers"? Not necessarily the same compilers in each case, but both kinds are extremely rare.
Of course, if num is of an unsigned type, or one of the C99 fixed-width integer types (int8_t and so on, which are required to be 2's complement), then this isn't an issue. In that case, I consider %2 to be more elegant and meaningful, and &1 to be a hack that might conceivably be necessary sometimes for performance. I think for example that CPython doesn't do this optimisation, and the same will be true of fully interpreted languages (although then the parsing overhead likely dwarfs the difference between the two machine instructions). I'd be a bit surprised to come across a C or C++ compiler that didn't do it where possible, though, because it's a no-brainer at the point of emitting instructions if not before.
In general, I would say that in C++ you are completely at the mercy of the compiler's ability to optimise. Standard containers and algorithms have n levels of indirection, most of which disappears when the compiler has finished inlining and optimising. A decent C++ compiler can handle arithmetic with constant values before breakfast, and a non-decent C++ compiler will produce rubbish code no matter what you do.
I define and use an "IsEven" function so I don't have to think about it: I choose one method or the other once, and then forget how I check whether something is even.
Only nitpick/caveat: with the bitwise operation you're assuming something about the binary representation of the number; with modulo you are not, since you're working with its arithmetic value. This is pretty much guaranteed to work with integers. Note, though, that in C and C++ the % operator isn't defined for floating-point operands, so "modulo for a double" really means std::fmod, while the bitwise operation simply won't compile for a double at all.