Which sorting algorithm works best on very large data sets?

There's no one algorithm that's clearly the "best" algorithm. If there were, we'd be using it everywhere! Instead, it depends on a bunch of factors.

For starters, can you fit your data into main memory? If you can't, then you'd need to rely on an external sorting algorithm. These algorithms are often based on quicksort and mergesort.
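To make that concrete, here's a minimal sketch of the external-mergesort idea in Python: sort chunks that fit in memory, write each sorted run to disk, then do a k-way merge of the runs. It assumes the input is a text file with one integer per line; the chunk size and the temp-file handling are just illustrative choices.

```python
import heapq
import os
import tempfile

def external_sort(input_path, output_path, chunk_size=100_000):
    """Sort a file of integers (one per line) that may not fit in memory:
    sort chunks that do fit, write them out as runs, then k-way merge."""
    run_paths = []
    with open(input_path) as f:
        while True:
            chunk = [int(line) for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()  # in-memory sort of one chunk
            fd, path = tempfile.mkstemp(text=True)
            with os.fdopen(fd, "w") as out:
                out.writelines(f"{x}\n" for x in chunk)
            run_paths.append(path)

    # Lazily merge the sorted runs with a heap (heapq.merge streams them).
    files = [open(p) for p in run_paths]
    try:
        runs = [(int(line) for line in fh) for fh in files]
        with open(output_path, "w") as out:
            out.writelines(f"{x}\n" for x in heapq.merge(*runs))
    finally:
        for fh in files:
            fh.close()
        for p in run_paths:
            os.remove(p)
```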

Second, do you know anything about your input distribution? If it's mostly sorted, then something like Timsort might be a great option, since it's designed to work well on sorted data. If it's mostly random, Timsort is probably not a good choice.
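Python's built-in sort happens to be Timsort, so you can get a rough feel for this with an informal timing experiment; the sizes and the "mostly sorted" construction below are arbitrary choices for illustration.

```python
import random
import timeit

n = 1_000_000
nearly_sorted = list(range(n))
# Perturb ~0.1% of positions so the data is "mostly sorted".
for _ in range(n // 1000):
    i, j = random.randrange(n), random.randrange(n)
    nearly_sorted[i], nearly_sorted[j] = nearly_sorted[j], nearly_sorted[i]
shuffled = random.sample(range(n), n)

# Timsort (behind sorted/list.sort) exploits the long runs in the first case.
print(timeit.timeit(lambda: sorted(nearly_sorted), number=5))
print(timeit.timeit(lambda: sorted(shuffled), number=5))
```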

Third, what kind of elements are you sorting? If you are sorting generic objects, then you're pretty much locked into comparison sorting. If not, perhaps you could use a non-comparison sort like counting sort or radix sort.
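For example, here's a minimal counting-sort sketch. It assumes the keys are small non-negative integers and that you know (or can bound) the maximum key up front.

```python
def counting_sort(values, max_value):
    """Sort non-negative integers in O(n + k) time, where k = max_value + 1.
    No comparisons between elements are ever made."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_value=9))
# [1, 1, 2, 3, 4, 5, 6, 9]
```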

Fourth, how many cores do you have? Some sorting algorithms (quicksort, mergesort, MSD radix sort) parallelize really well, while others do not (heapsort).
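As a rough illustration of the mergesort case, here's a sketch that sorts chunks on separate cores with Python's multiprocessing and merges the results in the parent process; the one-chunk-per-worker split and heapq.merge are just convenient choices, not the only way to do it.

```python
import heapq
import multiprocessing as mp
import random

def parallel_sort(values, workers=4):
    """Split the input into one chunk per worker, sort the chunks on
    separate cores, then merge the sorted runs in the parent process."""
    chunk_size = (len(values) + workers - 1) // workers
    chunks = [values[i:i + chunk_size]
              for i in range(0, len(values), chunk_size)]
    with mp.Pool(workers) as pool:
        sorted_chunks = pool.map(sorted, chunks)
    return list(heapq.merge(*sorted_chunks))

if __name__ == "__main__":
    data = [random.random() for _ in range(1_000_000)]
    assert parallel_sort(data) == sorted(data)
```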

Fifth, how are your data represented? If they're stored in an array, quicksort or a quicksort variant will likely do well because of locality of reference, while mergesort might be slow due to the extra memory needed. If they're in a linked list, though, the locality of reference from quicksort goes away and mergesort suddenly becomes competitive again.
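Here's a sketch of why mergesort is so natural on linked lists: you split and merge by re-linking nodes, so no auxiliary array is needed at all. The Node class below is hypothetical, just enough to show the idea.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    next: Optional["Node"] = None

def merge_sort(head: Optional[Node]) -> Optional[Node]:
    """Sort a singly linked list: split it in half with slow/fast pointers,
    sort each half recursively, then merge by re-linking nodes."""
    if head is None or head.next is None:
        return head
    # Find the middle: 'slow' ends up just before it.
    slow, fast = head, head.next
    while fast is not None and fast.next is not None:
        slow, fast = slow.next, fast.next.next
    middle, slow.next = slow.next, None        # cut the list in two
    left, right = merge_sort(head), merge_sort(middle)
    # Merge the two sorted halves without allocating new nodes.
    dummy = tail = Node(0)
    while left and right:
        if left.value <= right.value:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next
```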

The best option is probably to take a lot of different factors into account and then make a decision from there. One of the reasons it's so fun to design and study algorithms is that there's rarely one single best choice; the right option usually depends on your particular situation and on what your data actually look like.

You mentioned a few details about quicksort, heapsort, and mergesort that I wanted to touch on before wrapping up this answer. While you're right that quicksort has a degenerate O(n²) worst case, there are many ways to avoid this. The introsort algorithm keeps track of the recursion depth and switches to heapsort if it looks like the quicksort is about to degenerate. This guarantees O(n log n) worst-case behavior with low memory overhead while keeping most of quicksort's practical speed. Randomized quicksort, while still having an O(n²) worst case, has a vanishingly small probability of actually hitting that worst case.
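Here's a compact sketch of the introsort idea: quicksort with a depth limit that bails out to heapsort. For brevity it partitions out-of-place and uses a random pivot; real implementations (like the one behind C++'s std::sort) work in-place and are more careful about pivot selection and small-input cutoffs.

```python
import heapq
import math
import random

def introsort(values):
    """Quicksort, but if the recursion gets ~2*log2(n) deep (a sign of a
    degenerate split pattern), fall back to heapsort for that sub-array."""
    def heapsort(a):
        heapq.heapify(a)
        return [heapq.heappop(a) for _ in range(len(a))]

    def sort(a, depth_limit):
        if len(a) <= 16:
            return sorted(a)          # small inputs: any simple sort will do
        if depth_limit == 0:
            return heapsort(a)        # quicksort is degenerating: bail out
        pivot = random.choice(a)
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return (sort(less, depth_limit - 1) + equal
                + sort(greater, depth_limit - 1))

    return sort(list(values), 2 * max(1, int(math.log2(max(len(values), 2)))))

data = [random.randrange(1000) for _ in range(10_000)]
assert introsort(data) == sorted(data)
```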

Heapsort is a good algorithm in practice, but isn't as fast as the other algorithms in some cases because it doesn't have good locality of reference. That said, the fact that it never degenerates and needs only O(1) auxiliary space is a huge selling point.
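For reference, here's a minimal in-place heapsort sketch showing the O(1) auxiliary space property; the sift-down loop is where the index keeps jumping from i to 2i+1, which is exactly the locality problem mentioned above.

```python
def heapsort(a):
    """In-place heapsort: build a max-heap, then repeatedly swap the max
    to the end and restore the heap. Only O(1) extra space is used."""
    def sift_down(start, end):
        root = start
        while 2 * root + 1 < end:
            child = 2 * root + 1                      # left child
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1                            # right child is bigger
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child                              # index jump: poor locality

    n = len(a)
    for start in range(n // 2 - 1, -1, -1):           # build the max-heap
        sift_down(start, n)
    for end in range(n - 1, 0, -1):                   # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)

data = [5, 3, 8, 1, 9, 2]
heapsort(data)
assert data == [1, 2, 3, 5, 8, 9]
```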

Mergesort on an array does need O(n) auxiliary memory, which is one reason why you might not want to use it if you have a huge amount of data to sort. It's worth knowing about, though, since its variants (Timsort, external mergesort, parallel mergesort) are widely used.
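Here's a classic top-down mergesort on an array, with that O(n) auxiliary buffer called out explicitly.

```python
def merge_sort(a):
    """Classic top-down mergesort on a list. The `merged` buffer is the
    O(n) auxiliary memory referred to above."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged = []                      # extra O(n) space for the merge step
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

assert merge_sort([9, 4, 7, 1, 3]) == [1, 3, 4, 7, 9]
```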


Your question is too open-ended to be answered specifically. There are a number of efficient sorting algorithms, and each has its own strengths and weaknesses. If you know your data, it is possible that an asymptotically optimal algorithm (heapsort, quicksort, mergesort, etc.) is not the right tool for the job.

For example, in a recent product, we were required to keep the bookmarks in a Word document sorted by their order of appearance. The bookmarks could become unsorted due to editing of the document (copy, cut, paste), so after each of those operations it was important to re-sort the list. In this case, bubblesort was the right answer even though it has a higher big-O complexity than many other algorithms. The fact that the sort is efficient when the list is nearly sorted (which is usually the case in this circumstance) and that it's an in-place operation meant that it was the right tool for the job.
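The property doing the work there is bubblesort's early exit: a pass with no swaps means the list is already sorted. A minimal sketch (not the product's actual code, obviously):

```python
def bubble_sort(items):
    """In-place bubble sort. On a nearly sorted list it finishes after a
    pass or two, because a pass with no swaps means the list is sorted."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:       # nothing moved: the list is sorted, stop early
            break

# A nearly sorted list finishes after two passes instead of ~n.
bookmarks = [1, 2, 3, 5, 4, 6, 7, 8]
bubble_sort(bookmarks)
assert bookmarks == [1, 2, 3, 4, 5, 6, 7, 8]
```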

Take a hard look at your data and read up on the various strengths and weaknesses of the well-known sorting algorithms and you'll be well on your way to answering your own question.