Is it legitimate to specify a statement as a Theorem if it is proved using numerical methods (since it can't be proved analytically)?
In general, numerical methods don't constitute proofs. If all we have is an unknown blackbox function $f:\mathbb{R} \to \mathbb{R}$ that we know nothing about, and all we can do is compute its value at (finitely) many points, then we simply can't prove that $f$ is positive.
However, in specific cases, we could have arguments based on numerical methods that are valid. Typically, we'd need to make a numerical approximation, and then prove, using non-numerical methods, that our numerical approximation is accurate enough for the theorem to follow. As such, how numerical methods can aid us in proving a statement is very statement-specific.
Take, for example the following problem: Prove that $f(x) = x^2 + 1 > 0 \ \forall x \in [-1, 1]$.
Invalid proof: We computed $f(x)$ at $10^{10^{10000}}$ random points and used linear interpolation between them. Here's a plot. We can see that $f(x)$ is always positive.
Valid proof 1: We computed $f(x)$ at three points: $f(-1) = 2$, $f(0) = 1$, and $f(1)=2$. Let $g(x)$ be the linear interpolation of the points $(-1, 2)$, $(0, 1)$, and $(1, 2)$. $g$ attains its minimum at $g(0) = 1$. Since $f^{\prime \prime} = 2$ and the grid spacing is $h = 1$, we can compute an error bound on our interpolation (see https://ccrma.stanford.edu/~jos/resample/Linear_Interpolation_Error_Bound.html): $|f(x) - g(x)| \leq \frac{f^{\prime\prime} h^2}{8} = \frac{2}{8} = \frac{1}{4}$. Therefore, we can conclude that $f(x) \geq g(x) - \frac{1}{4} \geq 1 - \frac{1}{4} = \frac{3}{4} > 0$.
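The bookkeeping of this proof can be sketched in a few lines of Python (the grid, the analytic bound $|f''| \le 2$, and the spacing $h = 1$ are the ones from the argument above; only the derivative bound is non-numerical input):

```python
# Sketch of valid proof 1: sample f on a coarse grid, then use the
# analytic bound |f''| <= M to control the linear-interpolation error:
# |f(x) - g(x)| <= M * h**2 / 8  on a grid of spacing h.

def f(x):
    return x**2 + 1

M = 2.0   # analytic bound on |f''| (here f'' = 2 exactly)
h = 1.0   # grid spacing: sample points -1, 0, 1

samples = [f(x) for x in (-1.0, 0.0, 1.0)]
g_min = min(samples)          # min of the piecewise-linear interpolant
err = M * h**2 / 8            # worst-case interpolation error: 1/4

lower_bound = g_min - err     # rigorous lower bound: 1 - 1/4 = 3/4
assert lower_bound > 0        # hence f(x) > 0 on [-1, 1]
print(lower_bound)            # 0.75
```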
Note: Often, when we need to resort to numerical methods, it would be just as hard to compute the derivatives exactly. However, we don't need the actual derivatives, just an upper bound on them. The better the bound, the fewer points we need to evaluate $f(x)$ at. Furthermore, a bound on the first derivative is enough, but also having one on the second derivative can reduce the number of points needed.
Valid proof 2: We know that $f(x)$ is convex. We use a numerical method to compute its minimum and find that $\min f(x) \approx 1.0000000075$. We also have a true, non-numerical error bound on our approximation: $|1.0000000075 - \min f(x)| < 0.001$. Therefore, $\min f(x) > 1.0000000075 - 0.001 > 0$.
Finally, it doesn't really matter whether analytical proofs exist or not. The validity of any proof is only determined by that proof and no others.
In fact, it has been proven that not all true statements can be proven. But that is no reason to reduce our standards of rigor.
Well, is it proven or is it not proven? The way you've phrased your question is "if I've proven something with numerical methods, have I proven it?". Well, yes - you just said you had.
Say you want to prove that $f(n)$ is a prime number for $n<10^7$. Well then if you loop through all the numbers less than $10^7$ and check that $f(n)$ is prime for all of them, you have a proof. It isn't somehow less of a proof just because it doesn't involve a bunch of algebraic symbols and words like "therefore" and "by contradiction".
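As a smaller-scale example of the same idea (not the question's $f$): Euler's polynomial $n^2 + n + 41$ is prime for every $n < 40$, and a plain loop over the 40 cases is a complete proof of that finite statement. A minimal sketch:

```python
# A finite claim can be proven by exhaustive checking: Euler's
# polynomial n**2 + n + 41 is prime for all n < 40, and looping
# over every case *is* a proof of that statement.

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

assert all(is_prime(n * n + n + 41) for n in range(40))
# ...and the finite range matters: the very next value fails,
# since 40**2 + 40 + 41 = 1681 = 41**2.
assert not is_prime(40 * 40 + 40 + 41)
```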
However, if you want to prove that $f(n)$ is a prime number for all numbers, and you check for $n\leq10^7$, that simply isn't a proof. There's no sophisticated philosophical reason why it isn't, it just clearly isn't, is it? I mean, $f(10^7+1)$ might not be prime, for all you know.
So what you really should be asking is whether or not you actually have proven your statement.
And it turns out that with further analysis of your actual problem you could have found an exact proof using first-semester calculus. There is no need for speculative numerical exploration.
For your actual problem, to find solutions $x(γ,y)>0$ for the equation $$ (γ^{n−1}y)^n+(γ^{n−2}y)^{n−1}+...+(γy)^2+y=A(γ,y)=x^n+x^{n−1}+...+x, $$ it is rather easy to show that there is exactly one positive root $x$ for fixed positive $y$ and $γ$.
The right side at $x=0$ is zero and grows monotonically and convexly towards infinity, thus there is a positive root of the equation by the intermediate value theorem. Further, there is a lower bound $$ x>\frac{A}{1+A}\ge\frac{y}{1+y} $$ for the positive root, as either $x\ge 1$ or, for $0<x<1$, $$ A(1-x)=x(1-x^n)<x\implies x>\frac{A}{1+A} $$
In general, there are methods to determine the number of positive roots like Descartes rule or the Hurwitz criteria of stability. Descartes rule counts the sign changes in the coefficient sequence. Here $x^n+...+x^2+x-A=0$ has exactly one sign change which proves the existence of exactly one positive root.
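Counting the sign changes can be illustrated in a few lines (a sketch; the degree $n$ and the value of $A$ are arbitrary choices, not from the question):

```python
# Descartes' rule sketch: count sign changes in the coefficient
# sequence of x**n + ... + x**2 + x - A.  Exactly one sign change,
# so there is exactly one positive root (the number of positive
# roots equals the sign-change count or is smaller by an even
# number, and here a root must exist).

def sign_changes(coeffs):
    signs = [c for c in coeffs if c != 0]   # zero coefficients are skipped
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

n, A = 5, 3.0
coeffs = [1.0] * n + [-A]   # x^5 + x^4 + x^3 + x^2 + x - A, highest power first
assert sign_changes(coeffs) == 1
```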
The polynomial equation $a_nx^n+...+a_1x+a_0=0$ has the lower root bounds $$ |x|\ge\frac{|a_0|}{|a_0|+\max_{k=1..n}|a_k|} ~~ \text{ or } ~~ |x|\ge\frac{|a_0|}{\max(|a_0|,\,|a_1|+...+|a_n|)} $$ which are lower bounds for the single positive root.
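As a quick sanity check of these bounds (a sketch with an arbitrarily chosen polynomial, $x^2 + x - 2$, whose positive root is $x = 1$):

```python
# Evaluate both lower root bounds for x**2 + x - 2 = 0, whose
# positive root is x = 1, and confirm the root respects them.

a = [-2.0, 1.0, 1.0]   # coefficients a_0, a_1, a_2

bound1 = abs(a[0]) / (abs(a[0]) + max(abs(c) for c in a[1:]))   # 2/(2+1) = 2/3
bound2 = abs(a[0]) / max(abs(a[0]), sum(abs(c) for c in a[1:])) # 2/max(2,2) = 1

root = 1.0             # the positive root of x^2 + x - 2 = (x + 2)(x - 1)
assert bound1 <= root and bound2 <= root
```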