Can random.uniform(0,1) ever generate 0 or 1?
uniform(0, 1) can produce 0, but it'll never produce 1.
The documentation tells you that the endpoint b could be included in the values produced:

The end-point value b may or may not be included in the range depending on floating-point rounding in the equation a + (b-a) * random().
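That equation is easy to play with directly. A minimal sketch, using a hypothetical helper name my_uniform (not part of the random module) that simply mirrors the documented formula:

>>> import random
>>> def my_uniform(a, b):
...     # the documented formula behind random.uniform()
...     return a + (b - a) * random.random()
...
>>> 0 <= my_uniform(0, 1) < 1   # random() lies in [0.0, 1.0), so this always holds
True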
So for uniform(0, 1), the formula 0 + (1-0) * random(), simplified to 1 * random(), would have to be capable of producing 1 exactly. That would only happen if random.random() returned 1.0 exactly. However, random() never produces 1.0.
Quoting the random.random() documentation:
Return the next random floating point number in the range [0.0, 1.0).
The notation [..., ...) means that the first value is part of all possible values, but the second one is not. random.random() will at most produce values very close to 1.0. Python's float type is an IEEE 754 binary64 floating point value, which encodes a number of binary fractions (1/2, 1/4, 1/8, etc.) that make up the value, and the value random.random() produces is simply the sum of a random selection of those 53 fractions, from 2 ** -1 (1/2) through to 2 ** -53 (1/9007199254740992).
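One way to picture that (an approximation, not the exact implementation): random.random() behaves like a 53-bit random integer scaled down into [0.0, 1.0), so the highest value it can reach sits one step below 1.0:

>>> import random
>>> # assumption: equivalent in distribution to random.random()
>>> sample = random.getrandbits(53) / 2 ** 53
>>> 0.0 <= sample < 1.0
True
>>> (2 ** 53 - 1) / 2 ** 53   # the largest value this can ever produce
0.9999999999999999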
However, because it can produce values very close to 1.0, together with rounding errors that occur when you multiply floating point numbers, you can produce b for some values of a and b. But 0 and 1 are not among those values.
Note that random.random() can produce 0.0, so a is always included in the possible values for random.uniform() (a + (b - a) * 0 == a). Because there are 2 ** 53 different values that random.random() can produce (all possible combinations of those 53 binary fractions), there is only a 1 in 2 ** 53 (so 1 in 9007199254740992) chance of that ever happening.
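You can check the a + (b - a) * 0 == a identity with any made-up bounds (the 12.5 and 99.75 below are arbitrary example values):

>>> a, b = 12.5, 99.75   # arbitrary example bounds
>>> a + (b - a) * 0.0 == a   # what uniform(a, b) computes when random() returns 0.0
True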
So the highest possible value that random.random() can produce is 1 - (2 ** -53); simply pick a small enough value for b - a to allow for rounding to kick in when multiplied by higher random.random() values. The smaller b - a is, the greater the chances of that happening:
>>> import random, sys
>>> def find_b():
...     a, b = 0, sys.float_info.epsilon
...     while random.uniform(a, b) != b:
...         b /= 2
...     else:
...         return b
...
>>> print("uniform(0, {0}) == {0}".format(find_b()))
uniform(0, 4e-323) == 4e-323
If you hit b = 0.0, then we've divided 1023 times without any luck; the above value means we got lucky after 1019 divisions. The highest value I found so far (running the above function in a loop with max()) is 8.095e-320 (1008 divisions), but there are probably higher values. It's all a game of chance. :-)
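Those division counts follow directly from the exponents involved; a quick sanity check (the 1019 is just the count quoted above):

>>> import sys
>>> sys.float_info.epsilon == 2 ** -52   # the starting value of b
True
>>> 2 ** -52 / 2 ** 1019   # b after 1019 halvings
4e-323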
It can also happen if there are not many discrete steps between a and b, like when a and b have a high exponent and so may appear to be far apart. Floating point values are still only approximations, and the number of values they can encode is finite. For example, there is only 1 binary fraction of difference between sys.float_info.max and sys.float_info.max - (2 ** 970), so there is a 50-50 chance random.uniform(sys.float_info.max - (2 ** 970), sys.float_info.max) produces sys.float_info.max:
>>> a, b = sys.float_info.max - (2 ** 970), sys.float_info.max
>>> values = [random.uniform(a, b) for _ in range(10000)]
>>> values.count(sys.float_info.max) # should be roughly 5000
4997
"Several times" isn't enough. 10,000 isn't enough. random.uniform
chooses from among 2^53 (9,007,199,254,740,992) different values. You're interested in two of them. As such, you should expect to generate several quadrillion random values before getting a value that is exactly 0 or 1. So it's possible, but it's very very likely that you will never observe it.
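To put a rough number on "several quadrillion" (assuming two target values out of 2^53 equally likely outputs, so a hit chance of 2 / 2^53 per draw):

>>> 2 ** 53 / 2   # expected number of draws before hitting one of the two targets
4503599627370496.0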
Sure. You were already on the right track with trying uniform(0, 0.001) instead. Just keep restricting the bounds enough to make it happen sooner.
>>> random.uniform(0., 5e-324)
5e-324
>>> random.uniform(0., 5e-324)
5e-324
>>> random.uniform(0., 5e-324)
0.0