Algorithm to apply permutation in constant memory space

There is a trivial O(n^2) algorithm, but you can do this in O(n). E.g.:

A = [a, b, c, d, e]

P = [4, 3, 2, 0, 1]

We can swap each element in A with the element required by P (here P[i] is the original index of the element that should end up at position i); after each swap, one more element is in its right position. We do this in a circular fashion for each of the positions (the elements being swapped are marked with ^s):

[a, b, c, d, e] <- P[0] = 4 != 0 (where a initially was), swap 0 (where a is) with 4
 ^           ^
[e, b, c, d, a] <- P[4] = 1 != 0 (where a initially was), swap 4 (where a is) with 1
    ^        ^
[e, a, c, d, b] <- P[1] = 3 != 0 (where a initially was), swap 1 (where a is) with 3
    ^     ^
[e, d, c, a, b] <- P[3] = 0 == 0 (where a initially was), the cycle is complete

After one cycle, we find the next element in the array that is not yet in its right position and do this again. In the end you get the result you want, and since each position is touched a constant number of times (for each position, at most one swap is performed), it is O(n) time.

You can store the information about which elements are already in their right places by:

  1. setting the corresponding entry in P to -1, which is unrecoverable: after the operations above, P will become [-1, -1, 2, -1, -1], which denotes that only the element at index 2 might not be in its right position yet; one further step checks it (P[2] = 2, so it already is) and terminates the algorithm;

  2. setting the corresponding entry P[i] to -P[i] - 1: P becomes [-5, -4, 2, -1, -2], from which the original P can be recovered trivially in O(n) (negate and subtract one again); a sketch using this scheme follows below.
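
Here is a minimal sketch of this approach in Java, using the second marking scheme so that P survives the call. The method name applyPermutation and the char element type are illustrative choices; the convention assumed is the one from the example above, i.e. position i receives the element at A[P[i]].

static void applyPermutation(char[] A, int[] P) {
  int n = A.length;
  for (int i = 0; i < n; i++) {
    if (P[i] < 0) continue;        // already handled in an earlier cycle
    int j = i;
    while (true) {
      int next = P[j];
      P[j] = -P[j] - 1;            // mark position j as done, recoverably
      if (next == i) break;        // cycle closed
      // bring the element destined for position j into place and push
      // the displaced element one step further along the cycle
      char temp = A[j];
      A[j] = A[next];
      A[next] = temp;
      j = next;
    }
  }
  for (int i = 0; i < n; i++)      // restore the original P
    P[i] = -P[i] - 1;
}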


Yet another unnecessary answer! This one preserves the permutation array P explicitly, which was necessary for my situation, but pays for it in running time. Also, it does not require tracking the correctly placed elements. I understand that a previous answer provides the O(N) solution, so I guess this one is just for amusement!

We get best case complexity O(N), worst case O(N^2), and average case O(N log N). For large arrays (N ~ 10000 or greater), the log factor is small enough that the average case behaves almost linearly in practice.

Here is the core algorithm in Java (I mean pseudo-code *cough cough*)

int ind=0;
float temp=0;

for(int i=0; i<(n-1); i++){
  // get the index of the element that belongs at position i:
  // if P[i] < i, that element was already swapped away by an earlier
  // iteration, so follow the chain to find where it lives now
  ind = P[i];
  while(ind<i)
    ind = P[ind];

  // swap that element into position i
  temp = A[i];
  A[i] = A[ind];
  A[ind] = temp;
}

Here is an example of the algorithm running (similar to previous answers):

let A = [a, b, c, d, e]

and P = [2, 4, 3, 0, 1]

then expected = [c, e, d, a, b]

i=0:  [a, b, c, d, e] // (ind=P[0]=2)>=0 no while loop, swap A[0]<->A[2]
       ^     ^
i=1:  [c, b, a, d, e] // (ind=P[1]=4)>=1 no while loop, swap A[1]<->A[4]
          ^        ^
i=2:  [c, e, a, d, b] // (ind=P[2]=3)>=2 no while loop, swap A[2]<->A[3]
             ^  ^
i=3a: [c, e, d, a, b] // (ind=P[3]=0)<3 uh-oh! enter while loop...
                ^
i=3b: [c, e, d, a, b] // loop iteration: ind<-P[0]. now have (ind=2)<3
       ?        ^
i=3c: [c, e, d, a, b] // loop iteration: ind<-P[2]. now have (ind=3)>=3
             ?  ^
i=3d: [c, e, d, a, b] // good index found. Swap A[3]<->A[3]
                ^
done.
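
If you want to check the trace, here is a self-contained sketch of the same loop, wrapped in a hypothetical applyKeepingP method and using a char array purely so the letter example can be run as-is:

public class PermuteDemo {
  // same core loop as above; note that P is read but never modified
  static void applyKeepingP(char[] A, int[] P) {
    for (int i = 0; i < A.length - 1; i++) {
      int ind = P[i];
      while (ind < i)              // chase the element that was already swapped away
        ind = P[ind];
      char temp = A[i];            // swap it into position i
      A[i] = A[ind];
      A[ind] = temp;
    }
  }

  public static void main(String[] args) {
    char[] A = {'a', 'b', 'c', 'd', 'e'};
    int[] P = {2, 4, 3, 0, 1};
    applyKeepingP(A, P);
    System.out.println(new String(A));  // expected: cedab
  }
}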

This algorithm can bounce around in that while loop over indices j < i, up to at most i times during the ith iteration. In the worst case (I think!) each iteration of the outer for loop would result in i extra assignments from the while loop, so we'd have an arithmetic series (1 + 2 + ... + (N-1) = N(N-1)/2), which adds an N^2 term to the complexity. Running this for a range of N and averaging the number of 'extra' assignments needed by the while loop (averaged over many random permutations for each N), though, strongly suggests to me that the average case is O(N log N).
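
In case it is useful, here is a rough sketch of the kind of counting experiment just described. The trial counts and sizes are arbitrary choices, and only the number of while-loop lookups is counted, since that depends only on P and not on A:

import java.util.Random;

public class WhileLoopCost {
  public static void main(String[] args) {
    Random rng = new Random();
    int trials = 1000;                       // arbitrary number of random permutations per size
    for (int n : new int[]{100, 1000, 10000}) {
      long extra = 0;
      for (int t = 0; t < trials; t++) {
        int[] P = randomPermutation(n, rng);
        for (int i = 0; i < n - 1; i++) {
          int ind = P[i];
          while (ind < i) {                  // each pass is one 'extra' assignment
            ind = P[ind];
            extra++;
          }
        }
      }
      System.out.printf("N=%d: average extra assignments = %.1f%n",
                        n, (double) extra / trials);
    }
  }

  // Fisher-Yates shuffle of the identity permutation
  static int[] randomPermutation(int n, Random rng) {
    int[] p = new int[n];
    for (int i = 0; i < n; i++) p[i] = i;
    for (int i = n - 1; i > 0; i--) {
      int j = rng.nextInt(i + 1);
      int tmp = p[i]; p[i] = p[j]; p[j] = tmp;
    }
    return p;
  }
}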

Thanks!