Half iteration of exponential function
There is a method using Carleman matrices which gives increasingly good approximations.
Consider the family of polynomials $$g_t(x) = 1+ x + \frac{x^2}{2!} +...+ \frac{x^t}{t!} $$
with the aim of finding increasingly precise approximations as $t$ increases:
$$ f_t(f_t(x)) \approx g_t(x) \approx e^x $$
For some chosen $t$, define the Carleman matrix $G$ for $g_t(x)$ (I use a version which is transposed relative to the Wikipedia entry); for instance, for $t=2$:
$$ G_2 = \left[\small \begin{array}{cccccccc}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
0 & 1/2 & 2 & 9/2 & 8 & 25/2 & 18 & 49/2 \\
0 & 0 & 1 & 4 & 10 & 20 & 35 & 56 \\
0 & 0 & 1/4 & 9/4 & 17/2 & 45/2 & 195/4 & 371/4 \\
0 & 0 & 0 & 3/4 & 5 & 37/2 & 51 & 469/4 \\
0 & 0 & 0 & 1/8 & 2 & 45/4 & 41 & 931/8 \\
0 & 0 & 0 & 0 & 1/2 & 5 & 51/2 & 92
\end{array}\right]$$
We see the coefficients of our function $g_2(x)$ in the second column, those of $g_2(x)^0 = 1$ in the first column, those of $g_2(x)^2$ in the third, those of $g_2(x)^3$ in the fourth column, and so on. The key is that with a vector $V(x)=[1,x,x^2,x^3,\ldots]$ (up to the correct dimension) we can form the dot product $$ V(x) \cdot G_2 = V(g_2(x))$$
Actually, when we use software, the columns with higher indexes are always truncated versions of the powers of $g_2(x)$, so empirically we must take the approximations with a grain of salt.
The key point now is that, because of the form of the Carleman matrices, the "output" has the same form as the "input", so we can repeat the application:
$$ V(x) \cdot G_2 = V(g_2(x)) \\
V(g_2(x)) \cdot G_2 = V(g_2^{\circ 2}(x)) \\
V(g_2^{\circ 2}(x)) \cdot G_2 = V(g_2^{\circ 3}(x))
$$
or, more concisely, since matrix products are associative,
$$V(x) \cdot G_2^h = V(g_2^{\circ h}(x)) $$
We see that the $h$'th power of $G_2$ gives the $h$'th iterate of $g_2(x)$, and we can assume that inserting $h=1/2$ gives (at least) an approximation to $g_2^{\circ 1/2}(x)=f(x)$.
What we need is a matrix function for finding the square root; this can be done either by diagonalization (implemented in Pari/GP and larger CAS systems) or by Newton iteration.
What I find for $G_2^{0.5} $ is
1.0000000 0.49649737 0.24650274 0.12207723 0.060459565 0.030432921 0.015814772 0.0088358869
0 0.88272304 0.87964476 0.65278437 0.42951272 0.26484713 0.15906693 0.097969211
0 0.29626378 1.0688901 1.3895246 1.2985405 1.0259175 0.73517218 0.50088082
0 -0.073304928 0.44254563 1.4092345 2.1282729 2.3110280 2.0617264 1.6127751
0 0.020386654 -0.0089884907 0.62013806 1.9423663 3.2482288 3.8802221 3.6776250
0 -0.0023714159 -0.0087696684 0.062271773 0.87615938 2.7873325 5.0295289 6.2944776
0 -0.00064038818 0.0039322053 -0.0081032492 0.14270358 1.2522409 4.3026809 8.3339822
0 0.00019462828 -0.00056495459 -0.000028314112 0.0028871718 0.23073290 1.8777280 8.6538082
```
and we see that the coefficients in the second column give some approximation to Sheldon's halfe-function.
Also the first three coefficients $(c=)\,0.49649737$, $(b=)\,0.88272304$, $(a=)\,0.29626378$ give some approximation to the values of $(c,b,a)$ which I gave in my earlier comment and which solve your system of equations (3.1) to (3.2).
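As a sanity check (my own, with loose tolerances, since a degree-2 truncation is crude), one can compose that quadratic with itself and compare against $\exp$:

```python
from math import exp

# Approximate half-iterate coefficients read off above.
c, b, a = 0.49649737, 0.88272304, 0.29626378
f = lambda x: c + b * x + a * x * x

print(f(f(0.0)), exp(0.0))   # ~1.008 vs 1
print(f(f(0.1)), exp(0.1))   # ~1.118 vs ~1.105
```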
Now if we make $g_t(x)$ a better approximation of the true $\exp(x)$ function by taking, say, $t=8$, we get better approximations to Sheldon's Kneser solution. Let $G_8$ be defined with size $16 \times 16$; then its top left is:
1 1 1 1 1 1
0 1 2 3 4 5
0 1/2 2 9/2 8 25/2
0 1/6 4/3 9/2 32/3 125/6
0 1/24 2/3 27/8 32/3 625/24
0 1/120 4/15 81/40 128/15 625/24
0 1/720 4/45 81/80 256/45 3125/144
0 1/5040 8/315 243/560 1024/315 15625/1008
0 0 127/20160 1093/6720 5461/3360 19531/2016
0 0 41/30240 3271/60480 5459/7560 32549/6048
0 0 19/75600 3247/201600 21809/75600 162697/60480
0 0 1/25200 871/201600 3953/37800 14779/12096
0 0 19/3628800 2533/2419200 62843/1814400 24589/48384
0 0 1/1814400 37/161280 2389/226800 47129/241920
0 0 1/25401600 1537/33868800 7499/2540160 702839/10160640
0 0 0 4099/508032000 72803/95256000 3481427/152409600
```
and the square-root $G_8^{0.5} $ is
1.0000000 0.49857405 0.24856073 0.12386855 0.061729127 0.030790812
0 0.87630311 0.87401746 0.65347137 0.43393924 0.27003773
0 0.24751412 1.0147132 1.3342705 1.2683840 1.0263147
0 0.024641942 0.45793460 1.3404901 2.0049681 2.2203057
0 -0.00094891787 0.10348608 0.72416179 1.8842050 3.0275836
0 0.00022657746 0.010924205 0.23298583 1.1130489 2.7397893
0 0.000077390020 0.00063916208 0.046228369 0.43595415 1.7059468
0 -0.000032210830 0.00014299601 0.0059529659 0.11748258 0.75561802
0 -0.000014796522 -0.000021914892 0.00061375239 0.022696178 0.24536210
0 0.000011471527 -0.000021337036 0.000018742074 0.0033435423 0.060458407
0 -0.0000014276569 0.0000089535786 -0.000019453706 0.00035980043 0.011807188
0 -0.00000088358673 0.00000019051670 0.0000042134686 0.000016547312 0.0018917104
0 0.00000031550433 -0.00000099866936 0.00000031495830 0.0000035791002 0.00026146045
0 -0.00000000068329542 0.00000026181266 -0.00000053463876 -0.00000019767557 0.000038804566
0 -0.000000014747905 -0.000000019088871 0.00000011141548 -0.000000094870895 0.0000061666156
0 0.0000000019555565 -0.00000000099414503 -0.0000000090606767 0.000000019074856 0.0000014943166
```
The coefficients in the second column are now better approximations to Sheldon's solution, and they give a better $f(x)$ for the approximation $f(f(x))\approx \exp(x)$.
You see the principle. Ideally the Carleman matrix is of infinite size and the polynomial is of infinite order (or better: equals the exponential series).
By the logic of the Carleman matrices the following method should be less accurate, but its pattern of approximation towards the Kneser solution seems to be even better.
Here is a list of the coefficients of $f_t(x)$ for $t=3\ldots16$, where the Carleman matrices $G_t$ are also truncated to size $t \times t$ (and not $2t \times 2t$, $3t \times 3t$, or the like). I've written them horizontally for better visual comparison of the approximation towards the Kneser solution:
 t    x^0          x^1          x^2          x^3           x^4             x^5
 3    0.50000000   0.89442719   0.22360680   .             .               .
 4    0.49944144   0.88075164   0.23809540   0.022538920   .               .
 5    0.49907754   0.87768412   0.24309637   0.024235749   0.00082874617   .
 6    0.49887415   0.87676479   0.24517938   0.024772649   0.00013110906   0.000089433779
 7    0.49875947   0.87644601   0.24618449   0.024898334  -0.00030603063   0.00011382794
 8    0.49869216   0.87632858   0.24671872   0.024893887  -0.00055911402   0.00013292704
 9    0.49865090   0.87628661   0.24702315   0.024852593  -0.00070934134   0.00015154857
10    0.49862460   0.87627466   0.24720567   0.024805640  -0.00080017819   0.00016953631
11    0.49860726   0.87627481   0.24731954   0.024763154  -0.00085630807   0.00018510631
12    0.49859549   0.87627959   0.24739292   0.024727523  -0.00089167888   0.00019807499
13    0.49858729   0.87628581   0.24744148   0.024698565  -0.00091432971   0.00020860446
14    0.49858146   0.87629209   0.24747433   0.024675319  -0.00092902883   0.00021706224
15    0.49857724   0.87629790   0.24749699   0.024656725  -0.00093865651   0.00022382486
16    0.49857412   0.87630304   0.24751287   0.024641836  -0.00094499831   0.00022922925

KN    0.49856329   0.87633613   0.24755219   0.024571812  -0.00095213638   0.00025333982  ...
```
(KN = Kneser solution)
So I think the Kneser solution is the limit of this process as $t \to \infty$.
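The truncated-to-$t\times t$ variant is easy to reproduce; here is a numpy sketch (the function name is mine) which for $t=3$ gives the first table row, whose values are, as one can check by hand, exactly $1/2$, $2/\sqrt5$ and $1/(2\sqrt5)$:

```python
import numpy as np
from math import factorial

def half_iterate_coeffs(t):
    """Truncate g_t to a t x t Carleman matrix, take the principal
    matrix square root via diagonalization, return the second column."""
    coeffs = np.array([1.0 / factorial(k) for k in range(t)])  # exp series, degree < t
    G = np.zeros((t, t))
    col = np.zeros(t)
    col[0] = 1.0
    for j in range(t):
        G[:, j] = col
        col = np.polynomial.polynomial.polymul(col, coeffs)[:t]
    w, P = np.linalg.eig(G.astype(complex))
    R = (P @ np.diag(np.sqrt(w)) @ np.linalg.inv(P)).real
    return R[:, 1]

print(half_iterate_coeffs(3))   # ~ [0.5, 0.89442719, 0.22360680], the t=3 row
```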
Here is a short problem I proposed two years ago to my students (translated from French).
I can add a detailed solution later if required.
I originally found this material somewhere on the web but don't remember where.
Let's make the assumption that there exists a continuous map $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $f\circ f=\exp.$
We will prove, step by step, several properties of $f$.
- Prove that $\forall x\in\mathbb{R},\thinspace e^{x}>x.$
- Prove that the equation $f\left(x\right)=x$ doesn't have any solution.
- Deduce that $\forall x\in\mathbb{R},\thinspace f\left(x\right)>x.$
- Prove that $\forall x\in\mathbb{R},\thinspace f\left(x\right)<e^{x}.$
- Prove that $\forall x\in\mathbb{R},\thinspace f\left(e^{x}\right)=e^{f\left(x\right)}.$
- Compute ${\displaystyle \lim_{+\infty}f}.$
- Prove that there exists $\lambda<0$ such that ${\displaystyle \lim_{-\infty}f=\lambda.}$
- Prove finally that $f$ is strictly increasing.
I've written a few programs that calculate Kneser's sexp and slog functions. One of my recent efforts is fatou.gp, a pari-gp program that calculates the slog or Abel function for $\exp(x)$, for a wide range of real and complex bases. Kneser's construction requires calculating a Riemann mapping, for which it is very tricky to get accurate numerical results; so instead I do an equivalent iterated 1-cyclic $\theta(z)$ mapping from the $\alpha(z)$, or Abel, functions at the two fixed points. Here is the Taylor series for the half iterate of $\exp(x)$. You can compare these results with your system of equations. $$f(f(x))=\exp(x);\;\;\;f(x)=\text{sexp}(\text{slog}(x)+0.5)$$
{halfe= 0.498563287941114
+x^ 1* 0.876336132224813
+x^ 2* 0.247552187310898
+x^ 3* 0.0245718116969028
+x^ 4* -0.000952136380204206
+x^ 5* 0.000253339819008525
+x^ 6* 7.09275516366956 E-5
+x^ 7* -4.81808433402200 E-5
+x^ 8* 2.63228465405932 E-6
+x^ 9* 5.96598826774286 E-6
+x^10* -1.30879479719986 E-6
+x^11* -7.47165552015529 E-7
+x^12* 2.68510892327235 E-7
+x^13* 1.12440534247329 E-7
+x^14* -4.80789869461353 E-8
+x^15* -2.20118629742874 E-8
+x^16* 8.17933994010676 E-9
+x^17* 5.30688749879415 E-9
+x^18* -1.23819700193839 E-9
+x^19* -1.41844961463076 E-9
+x^20* 1.05287927108075 E-10
+x^21* 3.89632939104118 E-10
+x^22* 3.51707444355649 E-11
+x^23* -1.04753098725701 E-10
+x^24* -2.89321996209624 E-11
+x^25* 2.62480364845324 E-11
+x^26* 1.38848625050719 E-11
+x^27* -5.58405052292307 E-12
+x^28* -5.57754465436342 E-12
+x^29* 6.94355445214551 E-13
+x^30* 2.00345639417985 E-12
+x^31* 1.89932160713883 E-13
+x^32* -6.47541759020915 E-13
+x^33* -2.08549856161407 E-13
+x^34* 1.82048262120683 E-13
+x^35* 1.15807785730247 E-13
+x^36* -3.92069387382815 E-14
+x^37* -5.14573309824226 E-14
+x^38* 2.35454506494555 E-15
+x^39* 1.97495294136643 E-14
+x^40* 3.87021261821085 E-15 }
```
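As a quick plausibility check (my own), one can evaluate a truncation of this series numerically and verify $f(f(x))\approx \exp(x)$ near the expansion point:

```python
from math import exp

# First ten coefficients of the halfe series above.
halfe = [0.498563287941114, 0.876336132224813, 0.247552187310898,
         0.0245718116969028, -0.000952136380204206, 0.000253339819008525,
         7.09275516366956e-5, -4.81808433402200e-5, 2.63228465405932e-6,
         5.96598826774286e-6]

def f(x):
    return sum(c * x ** n for n, c in enumerate(halfe))

print(f(f(0.0)), exp(0.0))   # ~1.000002 vs 1
print(f(f(0.5)), exp(0.5))   # both ~1.64872
```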