Why is 1//0.01 == 99 in Python?
If this were division with real numbers, 1//0.01
would be exactly 100. Since the operands are floating-point approximations, though, the stored value of 0.01
is slightly larger than 1/100, which means the true quotient is slightly smaller than 100. It is this 99.something value that is then floored to 99.
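You can inspect the value actually stored for 0.01, and the exact quotient it produces, with the standard decimal and fractions modules:
>>> from decimal import Decimal
>>> Decimal(0.01)  # the double nearest to 0.01 is slightly too large
Decimal('0.01000000000000000020816681711721685132943093776702880859375')
>>> from fractions import Fraction
>>> Fraction(1) / Fraction(0.01)  # the exact quotient is just below 100
Fraction(576460752303423488, 5764607523034235)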
What you have to take into account is that //
is the floor operator, so a priori you should expect the result to land on 100 or on 99 with equal likelihood (*): the operation yields 100 ± epsilon
with epsilon > 0,
and the chance of getting exactly 100.00...0 is extremely low.
You can actually see the same effect with a minus sign:
>>> 1//.01
99.0
>>> -1//.01
-100.0
and you should be as (un)surprised.
On the other hand, int(-1/.01)
performs the division first and then applies int()
to the result, which is not a floor but a truncation toward 0! That means in this case,
>>> 1/.01
100.0
>>> -1/.01
-100.0
hence,
>>> int(1/.01)
100
>>> int(-1/.01)
-100
Rounding, though, would give you your expected result for this operation because, again, the error is small for these figures.
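To make the difference concrete on a value where flooring and truncation disagree (a minimal sketch, not from the original example):
>>> import math
>>> math.floor(-99.5)  # floor: toward negative infinity
-100
>>> int(-99.5)  # int(): truncation toward zero
-99
>>> round(-99.5)  # round: nearest integer, ties to even
-100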
(*) I am not saying that the probability is actually equal; I am just saying that, a priori, when you perform such a computation with floating-point arithmetic, that is the kind of estimate you are getting.
The reasons for this outcome are as you state, and are explained in Is floating point math broken? and many other similar Q&As.
When you know the number of decimals of the numerator and denominator, a more reliable way is to multiply those numbers first so they can be treated as integers, and then perform integer division on them:
So in your case, 1//0.01
should first be converted to 1*100//(0.01*100),
which is 100.
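You can check this at the prompt; note that 0.01 * 100 happens to round to exactly 1.0, so the floor division is exact:
>>> 1 * 100 // (0.01 * 100)
100.0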
In more extreme cases you can still get "unexpected" results. It might be necessary to add a round
call to the numerator and denominator before performing the integer division:
1 * 100000000000 // round(0.00000000001 * 100000000000)
But, if this is about working with fixed decimals (money, cents), then consider working with cents as the unit, so that all arithmetic can be done as integer arithmetic, and only convert to/from the main monetary unit (dollar) when doing I/O.
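A minimal sketch of that approach (the names here are illustrative, not from the question):
price_cents = 1999  # $19.99 kept as an integer number of cents
quantity = 3
total_cents = price_cents * quantity  # exact integer arithmetic, no floats involved
print(f"${total_cents // 100}.{total_cents % 100:02d}")  # $59.97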
Or alternatively, use a library for decimals, like decimal, which:
...provides support for fast correctly-rounded decimal floating point arithmetic.
from decimal import Decimal
cent = Decimal(1) / Decimal(100)  # contrary to floating point, this is exactly 0.01
print(Decimal(1) // cent)  # 100
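Equivalently, you can construct the value directly from a string, which is the usual way to get an exact decimal:
>>> from decimal import Decimal
>>> Decimal(1) // Decimal('0.01')
Decimal('100')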