Is there any advantage of using map over unordered_map in case of trivial keys?
If you want to compare the speed of your std::map and std::unordered_map implementations, you could use Google's sparsehash project, which has a time_hash_map program to time them. For example, with gcc 4.4.2 on an x86_64 Linux system:
$ ./time_hash_map
TR1 UNORDERED_MAP (4 byte objects, 10000000 iterations):
map_grow 126.1 ns (27427396 hashes, 40000000 copies) 290.9 MB
map_predict/grow 67.4 ns (10000000 hashes, 40000000 copies) 232.8 MB
map_replace 22.3 ns (37427396 hashes, 40000000 copies)
map_fetch 16.3 ns (37427396 hashes, 40000000 copies)
map_fetch_empty 9.8 ns (10000000 hashes, 0 copies)
map_remove 49.1 ns (37427396 hashes, 40000000 copies)
map_toggle 86.1 ns (20000000 hashes, 40000000 copies)
STANDARD MAP (4 byte objects, 10000000 iterations):
map_grow 225.3 ns ( 0 hashes, 20000000 copies) 462.4 MB
map_predict/grow 225.1 ns ( 0 hashes, 20000000 copies) 462.6 MB
map_replace 151.2 ns ( 0 hashes, 20000000 copies)
map_fetch 156.0 ns ( 0 hashes, 20000000 copies)
map_fetch_empty 1.4 ns ( 0 hashes, 0 copies)
map_remove 141.0 ns ( 0 hashes, 20000000 copies)
map_toggle 67.3 ns ( 0 hashes, 20000000 copies)
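
If you don't want to build sparsehash just to get a rough number, here's a minimal self-timed sketch of the same idea (my own harness, not time_hash_map; C++11, trivial int keys, and the absolute numbers will vary wildly by compiler and standard library):

#include <chrono>
#include <cstdio>
#include <map>
#include <unordered_map>

// Fill a map with n int keys, then fetch each one back;
// return nanoseconds per operation.
template <typename Map>
static double time_fill_and_fetch(int n) {
    using clock = std::chrono::steady_clock;
    auto start = clock::now();
    Map m;
    for (int i = 0; i < n; ++i) m[i] = i;        // grow
    long long sum = 0;
    for (int i = 0; i < n; ++i) sum += m.at(i);  // fetch
    auto stop = clock::now();
    if (sum < 0) std::puts("");  // crude attempt to keep the loops from being optimized away
    return std::chrono::duration<double, std::nano>(stop - start).count() / (2.0 * n);
}

int main() {
    const int n = 1000000;
    std::printf("std::map:           %.1f ns/op\n",
                time_fill_and_fetch<std::map<int, int>>(n));
    std::printf("std::unordered_map: %.1f ns/op\n",
                time_fill_and_fetch<std::unordered_map<int, int>>(n));
}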
Don't forget that map keeps its elements ordered. If you can't give that up, obviously you can't use unordered_map.
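
As a small illustration of what the ordering buys you, std::map supports in-order iteration and lower_bound range queries; unordered_map has no equivalent:

#include <cstdio>
#include <map>

int main() {
    // Keys come back sorted regardless of insertion order.
    std::map<int, const char*> m = {{5, "e"}, {1, "a"}, {3, "c"}, {9, "i"}};
    for (const auto& kv : m)
        std::printf("%d ", kv.first);                  // prints: 1 3 5 9
    std::printf("\n");

    // Range query: every key in [2, 6), without scanning the whole map.
    for (auto it = m.lower_bound(2), end = m.lower_bound(6); it != end; ++it)
        std::printf("%d=%s ", it->first, it->second);  // prints: 3=c 5=e
    std::printf("\n");
}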
Something else to keep in mind is that unordered_map generally uses more memory. map just has a few house-keeping pointers plus memory for each object. In contrast, unordered_map has a big array (which can get quite big in some implementations), plus additional memory for each object. If you need to be memory-aware, map should prove better, because it lacks the large array.
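
You can see that array directly through unordered_map's own introspection calls; a quick sketch (the actual bucket count is implementation-defined, so treat the output as a hint, not a spec):

#include <cstdio>
#include <unordered_map>

int main() {
    std::unordered_map<int, int> m;
    for (int i = 0; i < 1000000; ++i) m[i] = i;
    // The bucket array typically costs at least one pointer per bucket,
    // on top of the per-node allocations; std::map has no such table.
    std::printf("size: %zu, buckets: %zu, load factor: %.2f\n",
                m.size(), m.bucket_count(), m.load_factor());
}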
So, if you need pure lookup-retrieval, I'd say unordered_map is the way to go. But there are always trade-offs, and if you can't afford them, then you can't use it.
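
One tip if you go that route: if you know the size up front, reserve() allocates the bucket array once, so the build phase never rehashes. A sketch with a hypothetical build_table helper:

#include <string>
#include <unordered_map>

// Hypothetical example: build a lookup table once, query it forever after.
std::unordered_map<std::string, int> build_table(int n) {
    std::unordered_map<std::string, int> table;
    table.reserve(n);  // allocate the bucket array once; no rehashing below
    for (int i = 0; i < n; ++i)
        table.emplace("key" + std::to_string(i), i);
    return table;
}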
Just from personal experience, I found an enormous improvement in performance (measured, of course) when using unordered_map instead of map in a main entity look-up table.
On the other hand, I found it was much slower at repeatedly inserting and removing elements. It's great for a relatively static collection of elements, but if you're doing tons of insertions and deletions, the hashing + bucketing seems to add up. (Note, this was over many iterations.)
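
If you want to check whether your implementation behaves the same way, timing that toggle pattern directly is easy enough; a sketch along the same lines as the harness above (again, measure on your own setup before trusting the conclusion):

#include <chrono>
#include <cstdio>
#include <map>
#include <unordered_map>

// Repeatedly insert and erase the same n keys;
// return nanoseconds per insert/erase pair.
template <typename Map>
static double time_toggle(int n, int rounds) {
    auto start = std::chrono::steady_clock::now();
    Map m;
    for (int r = 0; r < rounds; ++r) {
        for (int i = 0; i < n; ++i) m[i] = i;   // insert
        for (int i = 0; i < n; ++i) m.erase(i); // erase
    }
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::nano>(stop - start).count()
           / (static_cast<double>(n) * rounds);
}

int main() {
    std::printf("map toggle:           %.1f ns/pair\n",
                time_toggle<std::map<int, int>>(100000, 20));
    std::printf("unordered_map toggle: %.1f ns/pair\n",
                time_toggle<std::unordered_map<int, int>>(100000, 20));
}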