Sorting an array in C?

In your particular case the fastest sort is probably the one described in this answer. It is optimized specifically for an array of 6 ints and uses sorting networks. It was measured (on x86) to be about 20 times faster than the library qsort. Sorting networks are optimal for sorting fixed-length arrays: since they are a fixed sequence of compare-and-swap instructions, they can even be implemented easily in hardware.
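For illustration, here is a minimal sketch of what such a network can look like for 6 ints. The sequence of compare-and-swap steps below (two sorted runs of three, then a merge) is just one valid 12-comparator network, not necessarily the exact one from the linked answer:

#include <stdio.h>

/* Compare-and-swap: after this, d[x] <= d[y]. */
#define SWAP(x, y) do { \
    if (d[(y)] < d[(x)]) { int t = d[(x)]; d[(x)] = d[(y)]; d[(y)] = t; } \
} while (0)

/* Sorts exactly 6 ints with a fixed sequence of 12 compare-and-swaps. */
static void sort6(int d[6])
{
    SWAP(1, 2); SWAP(0, 2); SWAP(0, 1);   /* sort d[0..2] */
    SWAP(4, 5); SWAP(3, 5); SWAP(3, 4);   /* sort d[3..5] */
    SWAP(0, 3); SWAP(1, 4); SWAP(2, 5);   /* merge the two sorted runs */
    SWAP(2, 4); SWAP(1, 3); SWAP(2, 3);
}

int main(void)
{
    int d[6] = {40, 10, 100, 90, 20, 25};
    sort6(d);
    for (int i = 0; i < 6; i++) printf("%d ", d[i]);
    printf("\n");
    return 0;
}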

Generally speaking, there are many sorting algorithms optimized for some specialized case. General-purpose algorithms like heap sort or quick sort are optimized for in-place sorting of an array of items. They have a complexity of O(n log n), n being the number of items to sort.

The library function qsort() is very well coded and efficient in terms of complexity, but it calls a user-provided comparison function through a pointer, and that call has a fairly high cost.

For sorting very large amounts of data, algorithms also have to take care of swapping data to and from disk; this is the kind of sort implemented in databases, and if you have such needs your best bet is to put the data in a database and use its built-in sort.


In C, you can use the built-in qsort function:

#include <stdlib.h>   /* for qsort */

/* Comparison callback: returns negative, zero or positive, like strcmp. */
int compare( const void* a, const void* b)
{
     int int_a = * ( (int*) a );
     int int_b = * ( (int*) b );

     if ( int_a == int_b ) return 0;
     else if ( int_a < int_b ) return -1;
     else return 1;
}

/* a is the array of 6 ints to sort */
qsort( a, 6, sizeof(int), compare );

see: http://www.cplusplus.com/reference/clibrary/cstdlib/qsort/
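For context, a complete little program built around that call could look like this (the array contents are just made-up sample values):

#include <stdio.h>
#include <stdlib.h>

int compare(const void* a, const void* b)
{
    int int_a = *((const int*)a);
    int int_b = *((const int*)b);

    if (int_a == int_b) return 0;
    else if (int_a < int_b) return -1;
    else return 1;
}

int main(void)
{
    int a[6] = {52, 14, 0, -7, 14, 3};     /* sample data */
    qsort(a, 6, sizeof(int), compare);     /* sort ascending */
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}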


To answer the second part of your question: an optimal (comparison-based) sorting algorithm is one that runs with O(n log(n)) comparisons. There are several that have this property (including quick sort, merge sort, heap sort, etc.), but which one to use depends on your use case.

As a side note, you can sometimes do better than O(n log(n)) if you know something about your data - see the Wikipedia article on Radix Sort
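For example, here is a minimal counting-sort sketch, assuming the values are known to be small non-negative integers (say below 256); it runs in O(n + k) time with no comparisons at all:

#include <string.h>

/* Counting sort for values in [0, 256): O(n + 256) time, no comparisons. */
void counting_sort(int *a, int n)
{
    int count[256];
    memset(count, 0, sizeof count);
    for (int i = 0; i < n; i++)       /* histogram the values */
        count[a[i]]++;
    int out = 0;
    for (int v = 0; v < 256; v++)     /* rewrite the array in order */
        while (count[v]-- > 0)
            a[out++] = v;
}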


Depends

It depends on various things. But in general, algorithms using a divide-and-conquer / dichotomic approach tend to perform well for sorting problems, as they have good average-case complexity.

Basics

To understand which algorithms work best, you will need basic knowledge of algorithmic complexity and big-O notation, so you can understand how they rate in terms of average-case, best-case and worst-case scenarios. If required, you'd also have to pay attention to the sorting algorithm's stability.

For instance, quicksort is usually an efficient choice. However, if you give a naive quicksort (one that always picks the first or last element as pivot) a perfectly inverted list, it degrades to O(n²) and will perform poorly (a simple selection sort can do better in that case!). Shell sort can also be a good complement to quicksort if you perform a pre-analysis of your list.
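To make that concrete, here is a deliberately naive quicksort sketch that always picks the last element as pivot; fed an already-sorted or reverse-sorted array, it does O(n²) work and recurses n deep, which is exactly the pathological case described above:

/* Naive quicksort: last element as pivot (Lomuto partition). */
static void swap_int(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void quicksort_naive(int *a, int lo, int hi)
{
    if (lo >= hi) return;
    int pivot = a[hi];                 /* fixed pivot choice: the last element */
    int i = lo;
    for (int j = lo; j < hi; j++)      /* partition around the pivot */
        if (a[j] < pivot)
            swap_int(&a[i++], &a[j]);
    swap_int(&a[i], &a[hi]);
    quicksort_naive(a, lo, i - 1);     /* on sorted/reversed input one side stays */
    quicksort_naive(a, i + 1, hi);     /* nearly full: depth O(n), work O(n^2)   */
}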

Have a look at the following "advanced" sorts, which use divide-and-conquer approaches:

  • quicksort
  • shellsort
  • mergesort

And these more straightforward algorithms for less complex cases (a minimal insertion sort is sketched after this list):

  • bubblesort
  • selection sort
  • insertion sort
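As an illustration of the simpler family, here is a minimal insertion sort sketch; it is O(n²) in general but close to O(n) on nearly sorted input, which is why it is often used as the base case of hybrid sorts:

/* Insertion sort: shifts each element left until it sits in order. */
void insertion_sort(int *a, int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > key) {   /* shift larger elements right */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}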

Further

The above are the usual suspects when getting started, but there are countless others.

As pointed out by R. in the comments and by kriss in his answer, you may also want to have a look at HeapSort, which has a better worst-case complexity guarantee than quicksort (but won't often fare better in practical settings). There are also variants and hybrid algorithms (e.g. TimSort).
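For completeness, a minimal heapsort sketch (sift-down variant); it sorts in place with O(n log n) worst-case comparisons and no recursion:

/* Restore the max-heap property for the subtree rooted at i, within a[0..n-1]. */
static void sift_down(int *a, int n, int i)
{
    for (;;) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) return;
        int t = a[i]; a[i] = a[largest]; a[largest] = t;
        i = largest;
    }
}

/* Heapsort: build a max-heap, then repeatedly move the max to the end. */
void heapsort_int(int *a, int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)
        sift_down(a, n, i);
    for (int end = n - 1; end > 0; end--) {
        int t = a[0]; a[0] = a[end]; a[end] = t;  /* current max goes to its final slot */
        sift_down(a, end, 0);
    }
}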