Optimization of function taking a permutation
How about a Monte-Carlo Metropolis search? I'll implement a simplistic version here; see the complete, universal code further down. Update: cleaned-up code is now available in the Wolfram Function Repository, so you can use ResourceFunction["MaximizeOverPermutations"] instead of a locally defined MaximizeOverPermutations. NUG25 and NUG30 are given as applications in the documentation.
To move stochastically through permutation space, we need a random-move generator. Here I'll only use random transpositions (two-element swaps) on M=100 list elements: given a list L of 100 elements, generate a new list with two random elements interchanged:
M = 100;
randomperm[L_] := Permute[L, Cycles[{RandomSample[Range[M], 2]}]]
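As a quick sanity check (the seed here is arbitrary, chosen only to make the example reproducible), randomperm returns a valid permutation that differs from its input in exactly two positions:

```mathematica
SeedRandom[1];
P = randomperm[Range[M]];
Sort[P] == Range[M]            (* True: the result is still a permutation *)
Count[P - Range[M], Except[0]] (* 2: exactly two positions were swapped *)
```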
With this randomperm function we travel stochastically through permutation space using the Metropolis-Hastings algorithm. One step of this algorithm consists of proposing a move (with randomperm) and accepting or rejecting it with a probability that depends on how much the merit function f increases or decreases:
f[samp_?ListQ] := f[samp] = (* merit function with memoization *)
Total@Total@Table[Table[(-1)^(i), {i, 1, Length[samp]}]*
Reverse@Cos[Mod[samp, n]]*
Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}]
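The f[samp] = … on the right-hand side memoizes every computed value as a DownValue of f; this is what will later let us query all values ever seen. The same pattern in miniature (a standard Wolfram Language idiom, shown here only for illustration):

```mathematica
fib[1] = fib[2] = 1;
fib[n_] := fib[n] = fib[n - 1] + fib[n - 2] (* each value is computed once, then cached *)
fib[20]
(* 6765 *)
```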
MH[L_, β_] := Module[{L1, f0, f1, fdiff, prob},
L1 = randomperm[L]; (* proposed new position *)
f0 = f[L]; (* merit function of old position *)
f1 = f[L1]; (* merit function of proposed new position *)
fdiff = N[f1 - f0]; (* probability of accepting the move *)
prob = If[fdiff > 0, 1, E^(β*fdiff)]; (* this is Metropolis-Hastings *)
(* make the move? with calculated probability *)
If[RandomReal[] <= prob, L1, L]]
The parameter β plays the role of an effective inverse temperature, and there is no general rule for how to set it.
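To get a feeling for β, note that uphill moves (fdiff > 0) are always accepted, while a downhill move is accepted with probability E^(β*fdiff). A small illustrative table (not part of the search itself):

```mathematica
Table[E^(β*fdiff) // N, {β, {1/10, 1, 10}}, {fdiff, {-1/10, -1, -10}}]
(* {{0.990, 0.905, 0.368},
    {0.905, 0.368, 0.0000454},
    {0.368, 0.0000454, 3.72*10^-44}} *)
(* large β rejects almost all downhill moves (greedy search);
   small β accepts most of them (nearly a random walk) *)
```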
Let's experiment: start with the identity permutation Range[M] and try β=1 to see how high we can push f:
With[{β = 1, nstep = 30000},
Z = NestList[MH[#, β] &, Range[M], nstep];]
ZZ = {#, f[#]} & /@ Z;
ListPlot[ZZ[[All, 2]]]
After only $30\,000$ Metropolis-Hastings steps we have already found a permutation that gives $f=1766.64$:
MaximalBy[ZZ, N@*Last] // DeleteDuplicates
(* {{{69, 31, 91, 2, 47, 89, 75, 37, 96, 61, 40, 22, 64, 95, 81,
10, 66, 43, 19, 82, 85, 26, 28, 62, 78, 72, 34, 54, 45, 86,
57, 60, 65, 33, 13, 74, 5, 8, 11, 68, 77, 88, 23, 15, 35,
50, 83, 3, 93, 9, 18, 53, 63, 4, 58, 56, 30, 42, 46, 55, 36,
94, 1, 87, 51, 44, 14, 21, 97, 27, 52, 49, 99, 73, 39, 71,
7, 20, 41, 48, 24, 38, 29, 84, 6, 79, 90, 16, 59, 32, 12,
70, 98, 67, 92, 100, 76, 25, 17, 80},
184 + 154 Cos[1] - 157 Cos[2] - 252 Cos[3] - 194 Cos[4] +
69 Cos[5] + 238 Cos[6] + 190 Cos[7] + 8 Cos[8] - 154 Cos[9] -
120 Cos[10] + 17 Cos[11] + 94 Cos[12] + 134 Cos[13] + 19 Cos[14] -
81 Cos[15] - 76 Cos[16] + 14 Cos[17] + 23 Cos[18] + 36 Cos[19] +
4 Cos[20] - 35 Cos[21] - 21 Cos[22]}} *)
We can continue along these lines by (i) increasing $\beta$, and (ii) introducing further moves beyond randomperm. For example, we can raise $\beta$ slowly during the MH iteration, starting at $\beta_{\text{min}}$ and going up to $\beta_{\text{max}}$: this amounts to simulated annealing and tends to yield higher values of f.
With[{βmin = 10^-2, βmax = 10, nstep = 10^6},
With[{γ = N[(βmax/βmin)^(1/nstep)]},
Z = NestList[{MH[#[[1]], #[[2]]], γ*#[[2]]} &, {Range[M], βmin}, nstep];]]
ZZ = {#[[1]], #[[2]], f[#[[1]]]} & /@ Z;
ListLogLinearPlot[ZZ[[All, {2, 3}]]]
After playing around for a while: thanks to the memoization, all f-values computed so far are stored as DownValues of f, so we can easily determine the largest f-value seen so far. In my case the largest value ever seen was $f=1805.05$:
MaximalBy[Cases[DownValues[f],
RuleDelayed[_[f[L_ /; VectorQ[L, NumericQ]]], g_] :> {L, g}],
N@*Last]
(* {{{93, 61, 1, 15, 7, 2, 51, 72, 92, 78, 59, 43, 58, 10, 63, 21, 13,
48, 76, 49, 99, 42, 35, 31, 11, 95, 69, 88, 82, 36, 57, 77, 97, 73,
47, 9, 28, 86, 24, 79, 6, 71, 39, 27, 83, 68, 40, 33, 98, 80, 75,
37, 91, 32, 19, 3, 56, 25, 84, 87, 41, 100, 52, 20, 64, 67, 34, 60,
14, 50, 70, 16, 46, 17, 90, 94, 5, 55, 23, 54, 45, 4, 85, 38, 65,
26, 18, 44, 29, 22, 81, 89, 66, 74, 96, 62, 30, 8, 12, 53},
170 + 174 Cos[1] - 150 Cos[2] - 282 Cos[3] - 172 Cos[4] +
120 Cos[5] + 218 Cos[6] + 191 Cos[7] - 13 Cos[8] - 214 Cos[9] -
141 Cos[10] + 22 Cos[11] + 117 Cos[12] + 109 Cos[13] +
27 Cos[14] - 60 Cos[15] - 52 Cos[16] + 6 Cos[17] + 23 Cos[18] +
43 Cos[19] - 8 Cos[20] - 29 Cos[21] - 19 Cos[22]}} *)
%[[All, 2]] // N
(* {1805.05} *)
Complete and universal code for permutational optimization
Here is a version of the above code that is more cleaned up and emits useful error messages:
(* error messages *)
MaximizeOverPermutations::Pstart = "Starting permutation `1` is invalid.";
MaximizeOverPermutations::f = "Optimization function does not yield a real number on `1`.";
(* interface for calculation at fixed β *)
MaximizeOverPermutations[f_, (* function to optimize *)
M_Integer /; M >= 2, (* number of arguments of f *)
β_?NumericQ, (* annealing parameter *)
steps_Integer?Positive, (* number of iteration steps *)
Pstart_: Automatic] := (* starting permutation *)
MaximizeOverPermutations[f, M, {β, β}, steps, Pstart]
(* interface for calculation with geometrically ramping β *)
MaximizeOverPermutations[f_, (* function to optimize *)
M_Integer /; M >= 2, (* number of arguments of f *)
{βstart_?NumericQ, (* annealing parameter at start *)
βend_?NumericQ}, (* annealing parameter at end *)
steps_Integer?Positive, (* number of iteration steps *)
Pstart_: Automatic] := (* starting permutation *)
Module[{P, g, Pmax, gmax, Pnew, gnew, β, γ, prob},
(* determine the starting permutation *)
P = Which[Pstart === Automatic, Range[M],
VectorQ[Pstart, IntegerQ] && Sort[Pstart] == Range[M], Pstart,
True, Message[MaximizeOverPermutations::Pstart, Pstart]; $Failed];
If[FailureQ[P], Return[$Failed]];
(* evaluate the function on the starting permutation *)
g = f[P] // N;
If[! Element[g, Reals], Message[MaximizeOverPermutations::f, P]; Return[$Failed]];
(* store maximum merit function *)
Pmax = P; gmax = g;
(* inverse temperature: geometric progression from βstart to βend *)
β = βstart // N;
γ = (βend/βstart)^(1/(steps - 1)) // N;
(* Metropolis-Hastings iteration *)
Do[
(* propose a new permutation by applying a random 2-cycle *)
Pnew = Permute[P, Cycles[{RandomSample[Range[M], 2]}]];
(* evaluate the function on the new permutation *)
gnew = f[Pnew] // N;
If[! Element[gnew, Reals],
Message[MaximizeOverPermutations::f, Pnew]; Return[$Failed]];
(* Metropolis-Hastings acceptance probability *)
prob = If[gnew > g, 1, Quiet[Exp[-β (g - gnew)], General::munfl]];
(* acceptance/rejection of the new permutation *)
If[RandomReal[] <= prob,
P = Pnew; g = gnew;
If[g > gmax, Pmax = P; gmax = g]];
(* update inverse temperature *)
β *= γ,
{steps}];
(* return maximum found *)
{Pmax, gmax}]
The OP's problem can be optimized with
f[samp_List] := Total[Table[(-1)^Range[Length[samp]]*Reverse@Cos[Mod[samp, n]]*
Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}], 2]
MaximizeOverPermutations[f, 100, {1/100, 10}, 10^6]
A simpler problem, where we know the perfect optimum, is
SeedRandom[1234];
MM = 100;
x = RandomVariate[NormalDistribution[], MM];
Z[L_List] := L.x
The optimum is known: order the permutation the same way as the numbers in the list x. For this particular draw of random numbers we get
Z[Ordering[Ordering[x]]]
(* 2625.98 *)
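The double Ordering may look cryptic: Ordering[x] gives the positions that sort x, and applying Ordering again converts those positions into the rank of each element. By the rearrangement inequality, pairing the largest entries of x with the largest permutation values maximizes the dot product. A small sketch:

```mathematica
xs = {0.3, 0.7, -1.2};
Ordering[xs]               (* {3, 1, 2}: positions of xs in increasing order *)
Ordering[Ordering[xs]]     (* {2, 3, 1}: the rank of each element of xs *)
Ordering[Ordering[xs]].xs  (* 1.5, the maximum over all 3! permutations *)
```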
A quick search yields something not quite as high,
MaximizeOverPermutations[Z, MM, 1, 10^4][[2]]
(* 2597.67 *)
To track the progress of the Monte-Carlo search, use a Sow/Reap combination:
zz = Reap[MaximizeOverPermutations[Sow@*Z, MM, 1, 10^4]];
ListPlot[zz[[2, 1]], GridLines -> {None, {zz[[1, 2]]}}]
zz = Reap[MaximizeOverPermutations[Sow@*Z, MM, {1/10, 10}, 10^5]];
ListPlot[zz[[2, 1]], GridLines -> {None, {zz[[1, 2]]}}]
Here is one approach from among the ones I allude to in a comment.
f[samp_?ListQ] :=
Total@Total@
Table[Table[(-1)^(i), {i, 1, Length[samp]}]*
Reverse@Cos[Mod[samp, n]]*
Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}]
Now just define a function that takes a numeric vector, creates a permutation from it via Ordering, and evaluates f on that permutation.
g[ll : {_?NumberQ ..}] := N[f[Ordering[ll]]]
We can get a reasonable value with NMaximize. Restricting the range of the values seems to help here.
n = 100;
vars = Array[x, n];
AbsoluteTiming[{max, vals} =
NMaximize[{g[vars], Thread[0 <= vars <= 1]},
Map[{#, 0, 1} &, vars], MaxIterations -> 5000];]
max
best = Ordering[vars /. vals]
N[f[best]]
(* During evaluation of In[140]:= NMaximize::cvmit: Failed to converge to the requested accuracy or precision within 5000 iterations.
Out[142]= {62.699518, Null}
Out[143]= 636.619153268
Out[144]= {9, 40, 46, 2, 19, 47, 53, 77, 97, 87, 21, 33, 71, 35, 95, \
73, 39, 28, 52, 43, 6, 75, 5, 20, 27, 31, 22, 64, 49, 83, 42, 38, 92, \
58, 65, 79, 30, 11, 12, 13, 7, 66, 86, 67, 41, 4, 72, 100, 60, 10, 1, \
48, 81, 8, 84, 55, 36, 32, 25, 96, 70, 44, 80, 16, 18, 68, 29, 88, \
89, 15, 91, 69, 23, 17, 82, 90, 94, 93, 50, 99, 59, 85, 74, 62, 56, \
26, 24, 34, 78, 3, 98, 63, 14, 61, 51, 76, 45, 54, 37, 57}
Out[145]= 636.619153268 *)
One could of course instead minimize in the same manner. There are also numerous variations one might try, using option and method sub-option settings for NMinimize.
It seems that the objective function must return a numeric value, not a symbolic expression.
f[samp_?ListQ] :=
Total@Total@
Table[Table[(-1)^(i), {i, 1, Length[samp]}]*
Reverse@Cos[Mod[samp, n]]*
Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}]
Nf[samp_?ListQ] :=
N@Total@Total@
Table[Table[(-1)^(i), {i, 1, Length[samp]}]*
Reverse@Cos[Mod[samp, n]]*
Mod[samp, n], {n, {3, 5, 7, 11, 13, 17, 23}}]
Print[forwardDP[f, Range[1, 100]] // f // N]
-118.075
Print[forwardDP[Nf, Range[1, 100]] // Nf]
1164.08
The first thing that came to mind is a heuristic; the other is approximate dynamic programming.
Heuristic
An easy and fast heuristic implementation:
Table[
Nest[
With[{try = RandomSample@Range[100]},
tryvalue = f[try];
If[#2 >= tryvalue, {#1, #2},
{try, tryvalue}]] & @@ # &,
{1, -10000}, 500],
{100}
] // MaximalBy[#, #[[2]] &] & // Flatten[#, 1] &
(* no elements should be duplicated *)
Not@*Equal @@ # & /@ Subsets[First@%, {2}] // And @@ # &
=>
True
Dynamic programming (forward)
forwardDP[obj_, action_?(VectorQ[#, IntegerQ] &)] :=
Block[{solution, nothing, tryaction},
solution = ConstantArray[nothing, Length@action];
Do[solution[[index]] = First[First[Table[solution[[index]] = trynum;
tryaction =
Join[DeleteCases[solution, nothing],
DeleteCases[action, x_ /; ContainsAny[solution][{x}]]];
{trynum, obj[tryaction]}, {trynum,
DeleteCases[action,
x_ /; ContainsAny[DeleteCases[solution, nothing]][{x}]]}] //
MaximalBy[#, #[[2]] &] &]], {index, Range[1, Length@action]}];
solution];
forwardDP[f, Range[1, 100]] // AbsoluteTiming
f[%[[2]]] (* apply f to the solution part, not the whole {time, solution} pair *)
=>
608
Not@*Equal @@ # & /@ Subsets[%%[[2]], {2}] // And @@ # &
=>
True
Regarding the feasible region of the control/action: please modify the code around the DeleteCases of trynum and tryaction to suit your problem.