Frugal conversion of uniformly distributed random numbers from one range to another
Your goal is ultimately to roll a k-sided die given only a p-sided die, without wasting randomness.
In this regard, Lemma 3 in "Simulating a dice with a dice" by B. Kloeckner shows that some waste of randomness is inevitable unless "every prime number dividing k also divides p". Thus, for example, if p is a power of 2 (and any block of random bits amounts to rolling a die whose number of faces is a power of 2) and k has prime factors other than 2, the best you can do is get arbitrarily close to no waste of randomness.
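To make the "arbitrarily close" part concrete, here is a small sketch (my own illustration, not Kloeckner's construction; the helper names are mine) that batches many rolls into a single draw using Lumbroso's Fast Dice Roller. The average number of coin flips per roll falls toward log2(6) ≈ 2.585 as the batch grows:

import math
import random

def fast_dice_roller(n):
    # Lumbroso's Fast Dice Roller: a uniform integer in [0, n) from fair
    # coin flips, reusing the leftover states after every rejection.
    # Returns (value, number of flips consumed).
    v, c, flips = 1, 0, 0
    while True:
        v, c, flips = 2 * v, 2 * c + random.getrandbits(1), flips + 1
        if v >= n:
            if c < n:
                return c, flips
            v, c = v - n, c - n

def batched_rolls(m, k=6):
    # Roll m k-sided dice from a single draw with n = k**m.
    x, flips = fast_dice_roller(k ** m)
    rolls = []
    for _ in range(m):
        x, r = divmod(x, k)
        rolls.append(r + 1)
    return rolls, flips

# Average flips per roll falls toward log2(6) ≈ 2.585 as the batch grows:
for m in (1, 10, 100):
    trials = 500
    used = sum(batched_rolls(m)[1] for _ in range(trials))
    print(m, round(used / (trials * m), 3), round(math.log2(6), 3))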
Besides batching bits to reduce "bit waste" (see also the Math Forum), there is also the technique of randomness extraction, discussed by Devroye and Gravel (2015-2020) and in my Note on Randomness Extraction.
See also the question: How to generate a random integer in the range [0,n] from a stream of random bits without wasting bits?, especially my answer there.
Keep adding more digits. Here's some Python to compute expected yields (this is slightly worse for a particular value of n than your approach because it doesn't save leftover bits, but it's good enough to make my point):
import math

def expected_digits(n, b):
    # Expected number of base-b digits you can peel off a uniform integer
    # in [0, n), discarding the leftover at the first rejection.
    total = 0
    p = 1
    while n >= b:
        p *= 1 - (n % b) / n   # probability this step yields another digit
        total += p
        n //= b
    return total

def expected_yield(k):
    # Expected decimal digits per input bit, for k input bits.
    return expected_digits(2 ** k, 10) / k

print(expected_yield(10))
print(expected_yield(30))
print(expected_yield(100000))
print(math.log10(2))
The output is
0.294921875
0.2952809327592452
0.301018918814536
0.3010299956639812
and as you can see, the yield for 100000 binary digits (second-to-last output line) gets quite close to the Shannon limit (last line).
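As a sanity check, here is a quick simulation of the scheme that expected_digits models (the helper name is mine): peel decimal digits off a uniform k-bit integer and discard the leftover at the first rejection. The empirical digits-per-bit should land near expected_yield(30):

import random

def extract_decimal_digits(x, n, b=10):
    # x is uniform on [0, n); peel off base-b digits while the remaining
    # range still supports one, and discard the leftover at the first
    # rejection (the scheme expected_digits models).
    digits = []
    while n >= b:
        m = n // b
        if x >= b * m:        # x fell in the ragged top part: give up
            break
        digits.append(x % b)  # a uniform digit, independent of x // b
        x, n = x // b, m
    return digits

# Empirical digits-per-bit for 30-bit inputs, to compare with expected_yield(30):
k, trials = 30, 100000
got = sum(len(extract_decimal_digits(random.getrandbits(k), 2 ** k))
          for _ in range(trials))
print(got / (trials * k))     # ≈ 0.295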
In theoretical terms, we're applying an arithmetic decoder, with all output digits given equal probability, to an infinite stream of bits (interpreted as a random number between 0 and 1). The asymptotic efficiency approaches perfection, but the more samples you take, the heavier the arithmetic gets; that tends to be the trade-off.
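Here is a minimal sketch of that decoder, assuming exact rational arithmetic via fractions.Fraction (the function name and structure are mine, for illustration only): treat the bit stream as a uniform real in [0, 1), track the interval of values consistent with the bits read so far, and emit a decimal digit whenever that interval fits inside one decile. The growing numerators and denominators are exactly the "heavier arithmetic" mentioned above.

import math
import random
from fractions import Fraction

def decimal_digits_from_bits(num_digits):
    # Treat the bit stream as a uniform real x in [0, 1): keep the interval
    # [low, low + width) of values consistent with the bits read so far and
    # emit a decimal digit whenever the interval fits inside one decile.
    low, width = Fraction(0), Fraction(1)
    digits, bits_used = [], 0
    while len(digits) < num_digits:
        width /= 2                      # one more bit halves the interval
        if random.getrandbits(1):
            low += width
        bits_used += 1
        while True:                     # emit every digit now determined
            d = int(low * 10)           # candidate digit: floor(10 * low)
            if low + width <= Fraction(d + 1, 10):
                digits.append(d)
                low, width = low * 10 - d, width * 10   # rescale back to [0, 1)
            else:
                break
    return digits, bits_used

digits, used = decimal_digits_from_bits(1000)
print(used / len(digits), 1 / math.log10(2))   # ≈ 3.32 bits per decimal digit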