Select random element from a set, faster than linear time (Haskell)

As far as I know, the proper solution is to use an indexed set -- i.e., an IntMap. You just need to store the total number of elements ever added alongside the map. Every time you add an element, you insert it with a key one greater than the previous one. Deleting an element is fine -- just don't decrement the counter. If a randomly chosen key no longer exists, generate a new random number and try again. This works well until the number of deletions dominates the number of live elements in the set; if that becomes a problem, you can keep a separate set of deleted keys to draw from when inserting new elements.
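
A minimal sketch of that bookkeeping (the RandSet type and all its names are hypothetical, invented just for illustration):

import qualified Data.IntMap as IM
import System.Random

-- Elements keyed by insertion order; nextKey counts every insertion
-- ever made, so deletions leave gaps among the keys.
data RandSet a = RandSet { elems :: IM.IntMap a, nextKey :: Int }

emptyRS :: RandSet a
emptyRS = RandSet IM.empty 0

insertRS :: a -> RandSet a -> RandSet a
insertRS x (RandSet m k) = RandSet (IM.insert k x m) (k + 1)

-- Deleting removes the key but leaves the counter alone.
deleteKeyRS :: Int -> RandSet a -> RandSet a
deleteKeyRS i (RandSet m k) = RandSet (IM.delete i m) k

-- Pick a random key; if it has been deleted, roll again.
-- Assumes a non-empty set; degrades as deletions accumulate.
randElemRS :: RandomGen g => RandSet a -> g -> (a, g)
randElemRS rs@(RandSet m k) g =
    case IM.lookup i m of
        Just x  -> (x, g')
        Nothing -> randElemRS rs g'
  where (i, g') = randomR (0, k - 1) g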


Data.Map has an indexing function (elemAt), so use this:

import qualified Data.Map as M
import Data.Map (size)
import System.Random

type Set a = M.Map a ()

insert :: (Ord a) => a -> Set a -> Set a
insert a = M.insert a ()

fromList :: Ord a => [a] -> Set a
fromList = M.fromList . flip zip (repeat ())

elemAt :: Int -> Set a -> a
elemAt i = fst . M.elemAt i

randElem :: (RandomGen g) => Set a -> g -> (a, g)
randElem s g = (elemAt n s, g')
    where (n, g') = randomR (0, size s - 1) g

And you have something quite compatible with Data.Set (in both interface and performance) that also has an O(log n) indexing function and the randElem function you asked for.

Note that randElem is O(log n) (and probably the fastest implementation you can get at this complexity), and all the other functions have the same complexity as in Data.Set. Let me know if you need any other specific functions from the Set API and I will add them.
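
For example, a quick usage sketch (the main wrapper is hypothetical; it just exercises the code above):

main :: IO ()
main = do
    g <- newStdGen
    let s = fromList [2, 3, 5, 7, 11 :: Int]
        (x, _) = randElem s g
    print x    -- one of the five elements, uniformly at random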


Here's an idea: You could do interval bisection.

  1. size s is constant time. Use randomR to pick how far into the set the element you want sits.
  2. Do split with various values between the original findMin and findMax until you get the element at the position you want. If you really fear that the set is made up of, say, reals and is extremely tightly clustered, you can recompute findMin and findMax at each step to guarantee that some elements are knocked off every time.

The performance would be O(n log n) in the worst case, so basically no worse than your current solution, but under only rather weak conditions (namely that the set is not entirely clustered around some accumulation point), the average performance should be about O((log n)^2), which is close to constant. If it's a set of integers, you get O(log n * log m), where m is the initial range of the set; it's only reals (or other data types whose order type has accumulation points) that can cause really nasty performance in an interval bisection.

PS. This produces a perfectly even distribution, as long as you watch for off-by-ones to make sure the elements at the very top and bottom can actually be selected.

Edit: added 'code'

Some admittedly inelegant code (here written for sets of integers). One thing to check is how mid is generated: it will need some tweaking depending on whether you are working with sets of ints or of reals, since interval bisection is inherently topological and oughtn't to work quite the same for sets with different topologies.

import qualified Data.Set as Set
import System.Random (getStdGen, randomR, RandomGen)

-- Element at 0-based index n, found by interval bisection.
-- Written for integral elements; for reals, compute mid with (/ 2) instead.
getNth :: Integral a => Set.Set a -> Int -> a
getNth s n
    | n == 0                = Set.findMin s
    | n + 1 == Set.size s   = Set.findMax s
    | n < sizeBott          = getNth bott n
    | pres && sizeBott == n = mid    -- mid itself is the nth element
    | pres                  = getNth top (n - sizeBott - 1)
    | otherwise             = getNth top (n - sizeBott)
  where
    mid               = (Set.findMax s - Set.findMin s) `div` 2 + Set.findMin s
    (bott, pres, top) = Set.splitMember mid s
    sizeBott          = Set.size bott

randElem :: (RandomGen g, Integral a) => Set.Set a -> g -> (a, g)
randElem s g = (getNth s n, g')
    where (n, g') = randomR (0, Set.size s - 1) g
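
And a quick sanity check (a hypothetical main, using the getStdGen already imported above):

main :: IO ()
main = do
    g <- getStdGen
    let s = Set.fromList [1, 4, 9, 16, 25 :: Int]
    print (fst (randElem s g))    -- one of the five elements, chosen uniformly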