How does the Franceschini method work?
I ask because there seem to be no other known O(n log n), stable, in-place sorting methods.
It's fairly easy to implement merge sort in place with only O(log n) additional space, which I'd guess is close enough in practice.
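For anyone curious what that looks like, here is a minimal sketch of such a variant in Python (the function names are my own): a stable merge sort whose merge step works by block rotations, so the only extra space is the O(log n) recursion stack. The trade-off is that this simple rotation-based merge makes the whole sort run in roughly O(n log² n) time rather than O(n log n).

```python
import bisect

def _reverse(a, lo, hi):
    """Reverse a[lo:hi] in place."""
    hi -= 1
    while lo < hi:
        a[lo], a[hi] = a[hi], a[lo]
        lo += 1
        hi -= 1

def _rotate(a, lo, mid, hi):
    """Exchange the adjacent blocks a[lo:mid] and a[mid:hi] in place (three reversals)."""
    _reverse(a, lo, mid)
    _reverse(a, mid, hi)
    _reverse(a, lo, hi)

def _merge_inplace(a, lo, mid, hi):
    """Stably merge the sorted runs a[lo:mid] and a[mid:hi] using rotations."""
    if lo >= mid or mid >= hi:
        return
    # Take the middle element of the left run as a pivot and find where it
    # belongs in the right run (bisect_left keeps equal elements stable).
    cut1 = (lo + mid) // 2
    pivot = a[cut1]
    cut2 = bisect.bisect_left(a, pivot, mid, hi)
    # Rotate the right-run elements smaller than the pivot in front of the
    # left-run elements that are >= the pivot.
    _rotate(a, cut1, mid, cut2)
    new_mid = cut1 + (cut2 - mid)   # the pivot's final position
    # Recurse on the regions on either side of the pivot.
    _merge_inplace(a, lo, cut1, new_mid)
    _merge_inplace(a, new_mid + 1, cut2, hi)

def merge_sort_inplace(a, lo=0, hi=None):
    """Stable merge sort whose only extra space is the O(log n) recursion stack."""
    if hi is None:
        hi = len(a)
    if hi - lo <= 1:
        return
    mid = (lo + hi) // 2
    merge_sort_inplace(a, lo, mid)
    merge_sort_inplace(a, mid, hi)
    _merge_inplace(a, lo, mid, hi)
```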
In fact, there is a merge sort variant that is stable and uses only O(1) additional memory: "Practical in-place mergesort" by Katajainen, Pasanen and Teuhola. It has an optimal O(n log n) running time, but it is not optimal in terms of data movement: it performs Ω(n log n) element moves, whereas the Franceschini paper shows that O(n) moves suffice.
It seems to run somewhat slower than a traditional merge sort, but not by a large margin. In contrast, the Franceschini version appears to be far more complicated and to carry a huge constant-factor overhead.
Just a relevant note: it IS possible to turn any unstable sorting algorithm into a stable one by simply storing the original array index alongside the key. When performing a comparison, if the keys are equal, the indices are compared instead.
Using such a technique would turn HeapSort, for example, into an in-place, worst-case O(n log n), stable algorithm (sketched below).
However, since we need to store O(1) of 'additional' data for every entry, we technically do need O(n) extra space, so this isn't really in-place unless you consider the original index part of the key. Franceschini's method would not require holding any additional data.
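Here is a minimal sketch of that index-tagging trick in Python, using a hand-rolled heapsort (the names `stable_heapsort` and `_sift_down` are mine, not from any library). Each element is tagged with its original index, and tuple comparison falls back to the index whenever two keys are equal, which is exactly what restores stability at the cost of the O(n) extra tags discussed above.

```python
def _sift_down(pairs, start, end):
    """Restore the max-heap property for pairs[start:end+1], rooted at start."""
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        # Pick the larger child; tuple comparison falls back to the original
        # index when the keys are equal, which is what makes the sort stable.
        if child + 1 <= end and pairs[child] < pairs[child + 1]:
            child += 1
        if pairs[root] < pairs[child]:
            pairs[root], pairs[child] = pairs[child], pairs[root]
            root = child
        else:
            return

def stable_heapsort(a, key=lambda x: x):
    """Heapsort made stable by tagging every element with its original index.

    The (key, original_index) pairs are what get compared, so ties between
    equal keys are broken by original position -- at the cost of O(n) extra
    space for the tags.
    """
    pairs = [(key(x), i, x) for i, x in enumerate(a)]   # O(n) extra space
    n = len(pairs)
    # Build a max-heap, then repeatedly move the maximum to the end.
    for start in range(n // 2 - 1, -1, -1):
        _sift_down(pairs, start, n - 1)
    for end in range(n - 1, 0, -1):
        pairs[0], pairs[end] = pairs[end], pairs[0]
        _sift_down(pairs, 0, end - 1)
    a[:] = [x for _, _, x in pairs]

records = [("b", 2), ("a", 1), ("b", 1), ("a", 2)]
stable_heapsort(records, key=lambda r: r[0])
# -> [("a", 1), ("a", 2), ("b", 2), ("b", 1)]: equal keys keep their input order
```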