Get number of elements greater than a number

Here's a quick way using Policy-Based Data Structures in C++:

There exists something called an Ordered Set, which lets you insert/remove elements in O(log N) time (and supports pretty much everything else std::set has to offer). It also gives two more operations: find the K-th smallest element, and find the rank of an element X (the number of elements smaller than it). The problem is that this doesn't allow duplicates :(
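
As a quick illustration, here's a minimal sketch of the plain ordered set (the typedef name `ordered_set` is my own):

    #include <ext/pb_ds/assoc_container.hpp>
    #include <ext/pb_ds/tree_policy.hpp>
    #include <functional>
    #include <iostream>
    using namespace __gnu_pbds;

    typedef tree<int, null_type, std::less<int>,
                 rb_tree_tag, tree_order_statistics_node_update> ordered_set;

    int main() {
        ordered_set s;
        for (int x : {10, 1, 3, 2}) s.insert(x);   // distinct values only
        std::cout << *s.find_by_order(2) << "\n";  // k-th smallest, 0-indexed: prints 3
        std::cout << s.order_of_key(3) << "\n";    // rank = count of elements < 3: prints 2
    }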

No worries though! We will pair each duplicate with a separate index/priority and define a new structure (call it an Ordered Multiset)! I've attached my implementation below for reference.

Finally, every time you want to find the number of elements greater than some x, call upper_bound(x) (the number of elements less than or equal to x) and subtract this number from the size of your Ordered Multiset!

Note: PBDS uses a lot of memory, so if that is a constraint, I'd suggest a balanced Binary Search Tree or a Fenwick Tree instead (a Fenwick sketch follows at the end of this answer).

    #include <bits/stdc++.h>
    #include <ext/pb_ds/assoc_container.hpp>
    #include <ext/pb_ds/tree_policy.hpp>
    using namespace std;
    using namespace __gnu_pbds;

    struct ordered_multiset { // ordered set extended to allow duplicate values
        int len = 0;
        const long long ADD = 1000010;       // strictly more than the max multiplicity of any value
        const long long MAXVAL = 1000000010; // shift so negative values map to positive keys
        unordered_map<int, int> mp;          // value -> number of copies inserted so far
        tree<long long, null_type, less<long long>,
             rb_tree_tag, tree_order_statistics_node_update> T;

        ordered_multiset() { len = 0; T.clear(); mp.clear(); }

        inline void insert(int x){           // O(log n)
            len++, x += MAXVAL;
            int c = mp[x]++;                 // this is the c-th copy of x
            T.insert(x * ADD + c); }

        inline void erase(int x){            // removes one copy of x, if present
            x += MAXVAL;
            int c = mp[x];
            if(c) {
                c--, mp[x]--, len--;
                T.erase(x * ADD + c); } }

        inline int kth(int k){               // 1-based index, returns the
            if(k < 1 || k > len) return -1;  // k-th smallest element,
            auto it = T.find_by_order(--k);  // -1 if none exists
            return (*it) / ADD - MAXVAL; }

        inline int lower_bound(int x){       // count of values < x
            x += MAXVAL;
            return T.order_of_key(x * ADD); }

        inline int upper_bound(int x){       // count of values <= x
            x += MAXVAL;
            int c = mp[x];
            return T.order_of_key(x * ADD + c); }

        inline int size() { return len; }    // number of elements
    };

Usage:

    ordered_multiset s;
    for(int i=0; i<n; i++) {
        int x; cin>>x;
        s.insert(x);
        int ctr = s.size() - s.upper_bound(x);
        cout<<ctr<<" ";
    }

Input (n = 5) : 10 1 3 3 2
Output : 0 1 1 1 3

Time Complexity : O(log n) per query/insert
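
For the Fenwick Tree alternative mentioned in the note above, a minimal sketch could look like this (assuming values can be mapped into the range [1, MAXV]; coordinate-compress them first if they can't):

    #include <iostream>
    #include <vector>

    struct Fenwick {
        int n;
        std::vector<int> bit;
        explicit Fenwick(int n) : n(n), bit(n + 1, 0) {}

        void add(int i, int delta) {            // point update: count[i] += delta
            for (; i <= n; i += i & -i) bit[i] += delta;
        }
        int prefix(int i) const {               // sum of counts in [1, i]
            int s = 0;
            for (; i > 0; i -= i & -i) s += bit[i];
            return s;
        }
    };

    int main() {
        const int MAXV = 1000000;               // assumed value range
        Fenwick fw(MAXV);
        int total = 0;
        for (int x : {10, 1, 3, 3, 2}) {
            fw.add(x, 1); total++;
            std::cout << total - fw.prefix(x) << ' ';  // elements > x
        }
        std::cout << '\n';                      // prints: 0 1 1 1 3
    }

Each insert and query is O(log MAXV), and the memory is one int per possible value rather than a tree node per element.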

References : mochow13's GitHub


Great question. I do not think there is anything in the STL which would suit your needs (provided you MUST have logarithmic times). I think the best solution then, as aschepler says in the comments, is to implement an RB tree. You may have a look at the STL source code, particularly stl_tree.h, to see whether you could reuse bits of it.

Better still, look at: Rank Tree in C++

Which contains a link to an implementation:

http://code.google.com/p/options/downloads/list
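
If you end up rolling your own, here is a minimal sketch of the rank-tree idea: a BST whose nodes track subtree sizes, so a rank query costs O(height). Balancing (the RB part) is deliberately omitted for brevity, so this is illustrative, not production-ready:

    #include <cstddef>
    #include <iostream>

    struct RankNode {
        int key;
        std::size_t size = 1;        // number of nodes in this subtree
        RankNode *left = nullptr, *right = nullptr;
        explicit RankNode(int k) : key(k) {}
    };

    std::size_t size_of(const RankNode* n) { return n ? n->size : 0; }

    RankNode* insert(RankNode* root, int key) {
        if (!root) return new RankNode(key);
        if (key < root->key) root->left  = insert(root->left, key);
        else                 root->right = insert(root->right, key); // duplicates go right
        root->size = 1 + size_of(root->left) + size_of(root->right);
        return root;
    }

    // Number of keys strictly greater than x, in O(height).
    std::size_t count_greater(const RankNode* root, int x) {
        std::size_t count = 0;
        while (root) {
            if (x < root->key) {                 // root and its right subtree are all > x
                count += 1 + size_of(root->right);
                root = root->left;
            } else {                             // root->key <= x, discard left subtree
                root = root->right;
            }
        }
        return count;
    }

    int main() {
        RankNode* root = nullptr;
        for (int x : {10, 1, 3, 3, 2}) root = insert(root, x);
        std::cout << count_greater(root, 2) << '\n';  // prints 3
    }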


You should use a multiset for logarithmic complexity, yes. But computing the distance is the problem: since set/map iterators are Bidirectional, not RandomAccess, std::distance has O(n) complexity on them:

    multiset<int> my_set;
    ...
    auto it = my_set.lower_bound(3);
    size_t count_inserted = distance(it, my_set.end()); // this is definitely O(n)
    my_set.insert(3);

Your complexity requirement is the complicated part. Here is a full analysis:

If you want O(log n) complexity for each insertion, you need a sorted structure such as a set. If you want the structure not to reallocate or move items when adding a new item, computing the distance to the insertion point will be O(n). If you know the number of insertions in advance, you do not need logarithmic insertion time in a sorted container: you can insert all the items and then sort, which is O(n log n) overall, exactly the same as n insertions into a set at O(log n) each. The only alternative is to use a dedicated container like a weighted RB-tree. Depending on your problem this may be the solution, or something really overkill.

  • Use multiset and distance: you are O(n log n) on insertion (yes, n insertions at O(log n) each) and O(n²) on distance computation overall, but computing distances is very fast in practice.
  • If you know the inserted data size (n) in advance: use a vector, fill it, sort it once, then answer your distance queries with binary search; you are O(n log n), and it is easy to code (see the sketch after this list).
  • If you do not know n in advance, your n is likely huge and each item memory-heavy, so you cannot afford O(n log n) reallocations: then you have reason to re-encode or reuse some non-standard code; if you really have to meet these complexity expectations, use a dedicated container. Also consider using a database, as you will probably have issues maintaining all this in memory.
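
A minimal sketch of the second bullet's approach, assuming all values are known up front (the sample data is mine):

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> values = {10, 1, 3, 3, 2};  // assumed sample data
        std::sort(values.begin(), values.end());     // O(n log n), done once

        int x = 2;
        // upper_bound is O(log n) on a sorted vector (random-access iterators),
        // and the iterator difference is O(1), unlike multiset::iterator.
        std::size_t greater = values.end() - std::upper_bound(values.begin(), values.end(), x);
        std::cout << greater << " elements are > " << x << '\n';  // prints 3
    }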