Calculating the nth super-root when n is greater than 2?
One thing we can do is render the third superroot as an iteration of second superroots.
Let $x^{x^x}=a$. Then
$(x^x)^{(x^x)}=a^x$
and we take two square superroots to get a fixed point iteration:
$\color{blue}{x=\sqrt{\sqrt{a^x}_s}_s}$
If we put $a>1$ and $x=1$ on the right side, we get
$x=\sqrt{\sqrt{a}_s}_s$
as our next iteration, and this will lie between $1$ and the true third superroot. This suggests that the scheme will be convergent.
Let's try this on the third superroot of $16$. Put $x=1$ on the right side of the fixed point relation above. Then we get
$x=\sqrt{\sqrt{16}_s}_s=1.769\text{ (4 digits)}$
Next time around we get, to the same accuracy,
$x=\sqrt{\sqrt{134.9}_s}_s=1.958\text{ (4 digits)}$
We seem to be converging to $2$ fairly rapidly.
For the third superroot of 2 the corresponding iterations are $1.000, 1.380, 1.459, 1.473, 1.476, \color{blue}{1.477},...$.
For smaller values of $a$ the performance deteriorates, which is connected with the square superroot no longer being single-valued once its argument drops below $1$. If $a=0.9$ and the initial value $x=1$ is tried, the iteration converges to about $0.890$ if we always take the square-superroot branch $\ge 1/e$; but this fails for $a=0.8$ because one of the square superroots is undefined over the reals.
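For experimentation, here is a minimal sketch of this fixed-point iteration in Python (not part of the original argument); it assumes SciPy's Lambert-W function and uses $\sqrt z_s=\exp(W(\ln z))$ for the square-superroot branch $\ge 1/e$ mentioned above:

```python
# Minimal sketch: the fixed-point iteration x <- ssrt(ssrt(a^x)),
# with the square superroot from the principal Lambert-W branch.
import numpy as np
from scipy.special import lambertw

def ssrt(z):
    """Square superroot: the y >= 1/e with y**y == z (needs z >= (1/e)**(1/e))."""
    return float(np.exp(lambertw(np.log(z)).real))

def third_superroot(a, steps=30):
    x = 1.0
    for _ in range(steps):
        x = ssrt(ssrt(a ** x))
    return x

print(third_superroot(16))  # -> 2.0, since 2^(2^2) = 16
print(third_superroot(2))   # -> 1.4766843..., as in the next answer
```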
Just for the sake of simplicity: it is possible to compute $\sqrt[n]a_s$ very easily when $a>1$, or when $n$ is odd and $a>0$, using bisection, which doesn't involve any hard calculations (so this is something doable with a calculator that only has exponentiation, and perhaps some paper).
As long as you have $x\le\sqrt[n]a_s\le y$, we can iteratively compute $[(x+y)/2]\widehat~\widehat~n$ and replace $x$ with $(x+y)/2$ if this value is at most $a$, or replace $y$ with it otherwise.
To find initial values to work with, note that $a\ge\sqrt[n]a_s$. From there, repeatedly halve $a$ until you reach a value $x$ with $x\widehat~\widehat~n\le a$ (so that $x\le\sqrt[n]a_s$), and set $y=2x$. Then apply the above.
Here is a simple program that implements this. It takes 53 steps to compute the result
$$\sqrt[3]2_s\approx1.4766843373578697$$
It is also notable that the above program may fail on larger inputs, since computing things like 256^^3 may not be reasonably doable. For a human, however, if the radicand is small enough, it is easy to find initial values manually.
However, since some programming languages give $a^b=\infty$ if $a$ and $b$ are too large, and $\infty$ is considered greater than anything, there is in fact no need to worry about larger inputs. This allows for the computation of things such as $\sqrt[7]{256}_s$ in only 60 steps. See here.
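To make the bisection concrete, here is a minimal sketch in Python (the program linked above is not reproduced here); it assumes that an overflowing power can simply be treated as $\infty$, as just discussed:

```python
# Minimal bisection sketch; float overflow in x**r is mapped to +inf,
# which compares greater than a, so the bracketing test still works.
def tetrate(x, n):
    """x^^n: the right-associated power tower x^(x^(...^x)) of height n."""
    r = x
    for _ in range(n - 1):
        try:
            r = x ** r
        except OverflowError:
            return float('inf')
    return r

def superroot_bisect(a, n, steps=60):
    x = float(a)                       # a >= the superroot, so start there
    while tetrate(x, n) > a:           # halve until x^^n <= a ...
        x /= 2
    lo, hi = x, 2 * x                  # ... then [x, 2x] brackets the superroot
    for _ in range(steps):
        mid = (lo + hi) / 2
        if tetrate(mid, n) <= a:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(superroot_bisect(2, 3))    # ~1.4766843373578697
print(superroot_bisect(256, 7))  # works despite intermediate overflow
```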
Improvement:
One can employ a faster root-finding algorithm that works well against functions with large second derivatives, for faster convergence in practice. The idea is similar to the above, where we define a new function
$$f(x)=x\widehat~\widehat~n-a$$
which we want to find the root of. We use the initial boundaries
$$(x_l,x_u)=(\min(1,a),\max(1,a))$$
or $x_u=2$ if $f(2)>0$, and then iteratively define the bisection and regula falsi midpoints:
$$x_\mathrm B:=\frac{x_u+x_l}2$$
$$x_\mathrm{RF}:=\frac{x_uf(x_l)-x_lf(x_u)}{f(x_l)-f(x_u)}=\frac{\frac{x_u}{f(x_u)}-\frac{x_l}{f(x_l)}}{\frac1{f(x_u)}-\frac1{f(x_l)}}\stackrel{f(x_u)\to\infty}\longrightarrow x_l$$
where $x_\mathrm{RF}:=x_l$ is used if $f(x_u)$ is infinite in double precision. We then define our new endpoint as
$$x_r:=\begin{cases}\frac12(x_\mathrm B+x_\mathrm{RF}),&\text{steps}\bmod3\ne0\text{ or }x_u-x_l>0.5\text{ or }|f(x_\mathrm{RF})|>0.5\\x_\mathrm{RF},&\text{else}\end{cases}$$
$$x_l:=x_r\text{ if }f(x_r)<0$$
$$x_u:=x_r\text{ if }f(x_r)>0$$
which converges roughly twice as fast as bisection at worst and slightly faster than linear convergence asymptotically. See here for an implementation of the above, which computes $\sqrt[3]2_s$ and $\sqrt[7]{256}_s$ in 9 and 21 iterations respectively.
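Since the linked implementation is not reproduced here, the following is a rough Python sketch of the hybrid bisection/regula-falsi scheme just described (the iteration counts above refer to the original implementation, not necessarily to this sketch, which is only checked for $a>1$):

```python
import math

def tetrate(x, n):
    """x^^n, with float overflow mapped to +inf as assumed above."""
    r = x
    for _ in range(n - 1):
        try:
            r = x ** r
        except OverflowError:
            return math.inf
    return r

def superroot_hybrid(a, n, tol=1e-15, max_steps=200):
    def f(x):
        return tetrate(x, n) - a            # may be +inf for large x
    x_l, x_u = min(1.0, a), max(1.0, a)
    if f(2.0) > 0:
        x_u = 2.0
    for step in range(1, max_steps + 1):
        if x_u - x_l < tol:
            break
        x_b = (x_u + x_l) / 2               # bisection midpoint
        if math.isinf(f(x_u)):
            x_rf = x_l                      # regula-falsi point degenerates to x_l
        else:
            x_rf = (x_u * f(x_l) - x_l * f(x_u)) / (f(x_l) - f(x_u))
        if step % 3 != 0 or x_u - x_l > 0.5 or abs(f(x_rf)) > 0.5:
            x_r = (x_b + x_rf) / 2          # average of the two candidate points
        else:
            x_r = x_rf                      # pure regula falsi every third step
        if f(x_r) < 0:
            x_l = x_r
        elif f(x_r) > 0:
            x_u = x_r
        else:
            return x_r
    return (x_l + x_u) / 2

print(superroot_hybrid(2, 3))    # ~1.476684337...
print(superroot_hybrid(256, 7))
```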
There is also an (individual) power series solution (a Puiseux series) for each $n$ separately. Unfortunately these series have a small radius of convergence (if nonzero at all), but they might be summable using Euler summation. I'll give an example for $n=3$. (More examples are in my small treatise on my webspace.)
Let's define our basic function
$$ v = f_3(u) = u \cdot \exp( u \cdot \exp(u)) $$
which, with $u=\log x$ and $v=\log y$, encodes $y=x^{x^x}$, because $f_3(\log x)=\log x\cdot x^x=\log\left(x^{x^x}\right)$. This function has a power series that Pari/GP easily displays to many, say $64$, coefficients. It is Lagrange-invertible, say
$$ u = g_3(v) = g_{3,1}\, v + \frac{g_{3,2}}{2!} v^2 + \frac{g_{3,3}}{3!} v^3 + \dots
= v - \frac{2}{2!} v^2 + \frac{3}{3!} v^3 + \frac{20}{4!} v^4 - \frac{295}{5!} v^5 + \frac{1554}{6!} v^6 + O(v^7) $$
Let's call the $n$'th superroot function $r_n(y)$, such that
$$ x = r_n(y) \iff y = \;^n x $$
Then with $n=3$ we have, asymptotically (or within a small range of convergence),
$$ r_3(y) = \exp(g_3(\log(y))) $$
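As a small cross-check of those coefficients (a sketch in Python/SymPy rather than Pari/GP), one can use the Lagrange inversion formula $[v^k]\,g_3=\frac1k\,[u^{k-1}]\left(u/f_3(u)\right)^k$, noting that $u/f_3(u)=\exp(-u\exp(u))$:

```python
import sympy as sp

# Reproduce the reversion coefficients quoted above via Lagrange inversion.
u = sp.symbols('u')
phi = sp.exp(-u * sp.exp(u))                      # u / f_3(u)
for k in range(1, 7):
    dk = sp.series(phi**k, u, 0, k).removeO().coeff(u, k - 1) / k
    print(k, dk * sp.factorial(k))                # coefficients 1, -2, 3, 20, -295, 1554
```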
By visual inspection of the first $64$ coefficients it seems as if $|y| \lt \exp(1/2)$ might give convergence. (Update: checked with $\small y=\exp(\exp(-1))\approx 1.444$, and this seems to be computable without Euler summation at all.)
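To illustrate that small-$y$ case, here is a quick numeric check in Python using only the six coefficients quoted above (more coefficients would of course sharpen it):

```python
from math import exp, log

# Evaluate r_3(y) = exp(g_3(log y)) from the truncated series above, at y = exp(exp(-1)).
coeffs = [1, -2, 3, 20, -295, 1554]          # g_{3,k}, to be divided by k!
y = exp(exp(-1))                             # ~1.4447
v = log(y)
fact = 1
g = 0.0
for k, c in enumerate(coeffs, start=1):
    fact *= k
    g += c / fact * v**k
x = exp(g)
print(x, x ** (x ** x), y)   # x^x^x already matches y to within about 1e-3 with six terms
```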
Using Euler summation, the range of summability seems to be extensible, but I do not have an exact estimate for the growth of the coefficients.
Let's look at the summation of $g_3(v)$ for $y=\exp(1/2)$, so that $v = \log(y) = 1/2$:
partial sums of series g_3(1/2)
index   direct partial sums    Euler summation, order 0.5    x = exp(g_3(v))
0 0 0 1.000000000
1 0.5000000000 0.2500000000 1.284025417
2 0.2500000000 0.3125000000 1.366837941
3 0.3125000000 0.3203125000 1.377558184
4 0.3645833333 0.3196614583 1.376661628
5 0.2877604167 0.3198649089 1.376941739
6 0.3214843750 0.3205749512 1.377919773
7 0.3465603299 0.3209987217 1.378503819
8 0.2958108569 0.3211143463 1.378663217
9 0.3245950850 0.3211192669 1.378670001
10 0.3403287830 0.3211180191 1.378668280
11 0.2981245348 0.3211251293 1.378678083
12 0.3269177171 0.3211317745 1.378687245
13 0.3373635102 0.3211344667 1.378690956
14 0.2982079698 0.3211348582 1.378691496
... ... ... ...
55 0.1639008657 0.3211350702 1.378691789
56 0.2430865079 0.3211350702 1.378691789
57 0.5951181182 0.3211350702 1.378691789
58 0.09819587958 0.3211350702 1.378691789
59 0.2552736284 0.3211350702 1.378691789
60 0.6589999027 0.3211350702 1.378691789
61 0.01025715717 0.3211350702 1.378691789
62 0.2806542398 0.3211350702 1.378691789
63 0.7342007399 0.3211350702 1.378691789
64 -0.1064953930 0.3211350702 1.378691789
65 0.3257696112 0.3211350702 1.378691789
66 0.8210482830 0.3211350702 1.378691789
giving the result
\\ v = 1/2
\\ y = exp(v) = 1.648721271
\\ x = 1.378691789... from the last entry in the above (exp() of Euler-summation)
x^x^x \\ check result of exp(Eulersummation)
%368 = 1.648721271
x^x^x- y \\ check error
%369 = -1.699715302 E-22
Using Euler summation of higher order, and still using $64$ coefficients, I could approximate $x$ for $y=16$ (thus $v=\log(16) \approx 2.772588722$) to a couple of digits:
partial sums of series g_3(v)
index   direct partial sums    Euler summation, order 5.8    x = exp(g_3(v))
0 0 0 1.000000000
1 2.772588722 0.4138192123 1.512583646
2 -4.914659500 0.5946280226 1.812356664
3 5.742129363 0.6381957710 1.893062278
4 54.98695039 0.6360439550 1.888993136
5 -347.7931741 0.6382424852 1.893150713
6 632.6698779 0.6531183317 1.921523444
7 4675.314968 0.6695911376 1.953438470
8 -40693.25817 0.6786063818 1.971128816
9 101996.7689 0.6804959536 1.974856927
10 534495.9511 0.6805394727 1.974942873
11 -5898689.882 0.6823339559 1.978490056
12 18438758.44 0.6856034795 1.984969362
13 67398852.83 0.6883322388 1.990393263
14 -950277060.1 0.6894759540 1.992671008
... ... ... ...
55 -2.664574273E40 0.6931453682 1.999996375
56 9.487882296E39 0.6931457660 1.999997171
57 9.002509460E41 0.6931461912 1.999998021
58 -6.072183170E42 0.6931464204 1.999998480
59 6.149361593E42 0.6931464569 1.999998553
60 1.803356223E44 0.6931464662 1.999998571
61 -1.371747030E45 0.6931465681 1.999998775
62 2.215487571E45 0.6931467349 1.999999109
63 3.558078850E46 0.6931468656 1.999999370
64 -3.073667769E47 0.6931469109 1.999999461
65 6.704426035E47 0.6931469096 1.999999458
66 6.882982587E48 0.6931469259 1.999999491
We see (and take this as the general trend) that direct evaluation of the series $g_3(v)$ leads to unbounded partial sums, which might nevertheless be handled successfully by Euler summation of appropriate order. For negative $v$ this might look different, because Euler summation can only succeed if the terms to be summed are (at least roughly) alternating in sign.
The same procedure can be done for higher $n$, simply giving different power series, and for $n \to \infty$ we get a well-known power series with a simple set of coefficients; see my small treatise which I have linked to.
Update 2: Just for fun I tried $y=256$, $v \approx 5.545177444$. Using Euler summation of order $11.37$ I got the following, where $128$ terms of the power series are now needed:
partial sums of series g_3(v)
index   direct partial sums    Euler summation, order 11.37   x = exp(g_3(v))
0 0 0 1.000000000
1 5.545177444 0.4482762688 1.565611165
2 -25.20381545 0.6593619374 1.933558210
3 60.05049546 0.7137176680 2.041567036
4 847.9676320 0.7103553682 2.034714203
5 -12040.99635 0.7136655574 2.041460652
6 50708.63898 0.7376632728 2.091043603
7 568167.2105 0.7661483392 2.151463567
8 -11046187.51 0.7827563692 2.187493503
9 62011106.34 0.7863107520 2.195282527
10 504890269.0 0.7863264545 2.195316998
11 -1.267027432E10 0.7906504964 2.204830194
12 8.701591401E10 0.7990721357 2.223476886
13 4.880970073E11 0.8065436624 2.240151869
14 -1.618550515E13 0.8098085843 2.247477743
... ... ... ...
115 6.351017290E120 0.8292221643 2.291535609
116 1.111485492E123 0.8292222056 2.291535703
117 -1.527759513E124 0.8292223349 2.291536000
118 3.224076332E124 0.8292224524 2.291536269
119 1.850204173E126 0.8292224818 2.291536336
120 -2.865500371E127 0.8292224486 2.291536260
121 9.519211594E127 0.8292224379 2.291536236
122 3.024663081E129 0.8292224940 2.291536364
123 -5.316849919E130 0.8292225794 2.291536560
124 2.371536870E131 0.8292226279 2.291536671
125 4.834907255E132 0.8292226204 2.291536654
126 -9.764577483E133 0.8292225984 2.291536603
127 5.423935796E134 0.8292226105 2.291536631
128 7.506467361E135 0.8292226592 2.291536743
129 -1.775490700E137 0.8292227055 2.291536849
130 1.177132710E138 0.8292227171 2.291536875
giving the result
\\ y = 256
\\ v = log(y) ~ 5.545
\\ x = 2.291536875... from the last entry in the above (exp() of Euler-summation)
x^x^x \\ check result of exp(Eulersummation)
%465 = 255.9990213
Using $256$ series coefficients and increased internal precision I got the result correct to even the eleventh digit.
So, taking this as an initial solution and then applying Newton interpolation (perhaps somehow following the idea of Oscar Lanzi), one might arrive at arbitrary accuracy. Again: I did not check the lower bound for $y$...
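One concrete way to do such a refinement (a sketch in Python, using plain Newton iteration on $h(x)=x^{x^x}-y$, which is not necessarily the exact scheme meant above):

```python
import math

# Newton refinement of a series-based starting value, using
# d/dx x^(x^x) = x^(x^x) * x^x * ((ln x + 1) ln x + 1/x).
def refine(y, x0, steps=8):
    x = x0
    for _ in range(steps):
        lx = math.log(x)
        xx = x ** x
        h = x ** xx - y
        dh = (x ** xx) * xx * ((lx + 1) * lx + 1 / x)
        x -= h / dh
    return x

print(refine(256.0, 2.291536875))   # polishes the estimate above to full double precision
print(refine(16.0, 2.0))            # stays at the exact value 2
```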