Weighted random selection with and without replacement
Here's what I came up with for weighted selection without replacement:
import random

def WeightedSelectionWithoutReplacement(l, n):
    """Selects without replacement n random elements from a list of (weight, item) tuples."""
    l = sorted((random.random() * weight, item) for weight, item in l)
    return [item for _, item in l[-n:]]
This is O(m log m) in m, the number of items in the list being selected from. I'm fairly certain this weights items correctly, though I haven't verified it in any formal sense.
Here's what I came up with for weighted selection with replacement:
import bisect
import random

def WeightedSelectionWithReplacement(l, n):
    """Selects with replacement n random elements from a list of (weight, item) tuples."""
    cuml = []
    items = []
    total_weight = 0.0
    for weight, item in l:
        total_weight += weight
        cuml.append(total_weight)
        items.append(item)
    return [items[bisect.bisect(cuml, random.random() * total_weight)] for _ in range(n)]
This is O(m + n log m), where m is the number of items in the input list, and n is the number of items to be selected.
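As a quick sanity check (the weights and items here are made-up examples, assuming both functions above are in scope):

items = [(1.0, 'a'), (2.0, 'b'), (3.0, 'c')]
print(WeightedSelectionWithoutReplacement(items, 2))  # e.g. ['b', 'c']
print(WeightedSelectionWithReplacement(items, 4))     # e.g. ['c', 'b', 'c', 'a']

Over many runs, 'c' should show up roughly three times as often as 'a'.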
A simple approach that hasn't been mentioned here is one proposed in Efraimidis and Spirakis. In Python, you could select m items from n >= m weighted items, with strictly positive weights stored in weights, returning the selected indices, with:
import heapq
import math
import random
def WeightedSelectionWithoutReplacement(weights, m):
    # The key log(u)/w is a monotone transform of u**(1/w), so taking
    # the m largest keys gives an Efraimidis-Spirakis sample.
    elt = [(math.log(random.random()) / weights[i], i) for i in range(len(weights))]
    return [x[1] for x in heapq.nlargest(m, elt)]
This is very similar in structure to the first approach proposed by Nick Johnson. Unfortunately, that approach selects elements with biased probabilities (see the comments on the method). Efraimidis and Spirakis proved in the linked paper that their approach is equivalent to random sampling without replacement.
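For example, a quick illustrative call (the weights are arbitrary, and the output shown is just one possible result):

weights = [0.5, 1.0, 2.0, 4.0]
print(WeightedSelectionWithoutReplacement(weights, 2))  # e.g. [3, 1]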
One of the fastest ways to draw many samples with replacement from an unchanging list is the alias method. The core intuition is that we can create a set of equal-sized bins for the weighted list that can be indexed very efficiently through bit operations, avoiding a binary search. It turns out that, done correctly, we only need to store two items from the original list per bin, so the split can be represented with a single percentage.
Let's take the example of five equally weighted choices, (a:1, b:1, c:1, d:1, e:1).
To create the alias lookup:
1. Normalize the weights so that they sum to 1.0: (a:0.2, b:0.2, c:0.2, d:0.2, e:0.2). This is the probability of choosing each weight.
2. Find the smallest power of 2 greater than or equal to the number of variables, and create this number of partitions, |p|. Each partition represents a probability mass of 1/|p|. In this case, we create 8 partitions, each able to contain 0.125.
3. Take the variable with the least remaining weight, and place as much of its mass as possible in an empty partition. In this example, we see that a fills the first partition: (p1{a|null,1.0}, p2, p3, p4, p5, p6, p7, p8) with (a:0.075, b:0.2, c:0.2, d:0.2, e:0.2).
4. If the partition is not filled, take the variable with the most weight, and fill the partition with that variable.
5. Repeat steps 3 and 4 until all of the weight from the original distribution has been assigned to partitions. For example, after another iteration of 3 and 4, we see (p1{a|null,1.0}, p2{a|b,0.6}, p3, p4, p5, p6, p7, p8) with (a:0, b:0.15, c:0.2, d:0.2, e:0.2) left to be assigned. A code sketch of this construction follows the list.
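Here is a rough sketch of that construction in Python. This is my own illustration of the steps above, not verified production code: build_alias_table is a made-up name, it assumes strictly positive weights and mutually comparable items, the 1e-12 tolerances paper over floating-point error rather than handle it robustly, and tie-breaking among equally heavy variables may differ from the walkthrough above.

import bisect

def build_alias_table(weighted_items):
    """Builds the alias lookup described above from (weight, item) tuples.

    Returns (table, nbits), where table[i] = (primary, alias, split):
    partition i yields `primary` with probability `split`, else `alias`.
    """
    total = float(sum(weight for weight, _ in weighted_items))
    # Step 1: normalize the weights so they sum to 1.0, sorted lightest first.
    mass = sorted((weight / total, item) for weight, item in weighted_items)
    # Step 2: smallest power of 2 >= the number of variables.
    nbits = (len(mass) - 1).bit_length()
    bin_mass = 1.0 / (1 << nbits)          # probability mass per partition
    table = []
    # Steps 3-5: fill each partition from the lightest remaining variable,
    # topping it up from the heaviest variable when it doesn't fill the bin.
    for _ in range(1 << nbits):
        light_mass, light_item = mass.pop(0)     # least remaining weight
        if light_mass >= bin_mass - 1e-12:
            # The lightest variable fills this partition on its own.
            table.append((light_item, light_item, 1.0))
            if light_mass - bin_mass > 1e-12:
                bisect.insort(mass, (light_mass - bin_mass, light_item))
        else:
            heavy_mass, heavy_item = mass.pop()  # most remaining weight
            table.append((light_item, heavy_item, light_mass / bin_mass))
            leftover = heavy_mass - (bin_mass - light_mass)
            if leftover > 1e-12:
                bisect.insort(mass, (leftover, heavy_item))
    return table, nbits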
At runtime:
1. Get a U(0,1) random number, say binary 0.001100000.
2. Bit-shift it left by lg2(|p|) bits to find the partition index. Here, we shift by 3, yielding 001.1, or position 1, and thus partition 2.
3. If the partition is split, use the fractional portion of the shifted random number to decide the split. In this case, the value is 0.5, and 0.5 < 0.6, so return a (see the code sketch below).
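And a matching sketch of the runtime lookup, under the same caveats (alias_sample is a made-up name, and with floats the "bit shift" is just a multiplication by 2^nbits):

import random

def alias_sample(table, nbits):
    """Draws one item from a table built by build_alias_table above."""
    r = random.random() * (1 << nbits)  # shift the binary fraction left by lg2(|p|) bits
    index = int(r)                      # integer part: the partition index
    fraction = r - index                # fractional part: decides the split
    primary, alias, split = table[index]
    return primary if fraction < split else alias

For the five-choice example, building the table once and calling alias_sample repeatedly makes each draw O(1): one random number, one multiply, one comparison, versus the O(log m) binary search of the cumulative-sum approach.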
Here is some code and another explanation, but unfortunately it doesn't use the bitshifting technique, nor have I actually verified it.