Given a polynomial-time algorithm, can we compute an explicit polynomial time bound just from the program?

[Edit: A bug in the original proof has been fixed, thanks to a comment by Francois Dorais.]

The answer is no. This kind of thing can be proved by what I call a "gas tank" argument. First enumerate all Turing machines in some effective manner $N_1, N_2, N_3, \ldots$ Then construct a sequence of Turing machines $M_1, M_2, M_3, \ldots$ as follows. On an input of length $n$, $M_i$ simulates $N_i$ (on empty input) for up to $n$ steps. If $N_i$ does not halt within that time, then $M_i$ halts immediately after the $n$th step. However, if $N_i$ halts within the first $n$ steps, then $M_i$ "runs out of gas" and starts behaving erratically, which in this context means (say) that it continues running for $n^e$ steps before halting, where $e$ is the number of steps that $N_i$ took to halt. Note that each $M_i$ runs in polynomial time either way: in roughly $n$ steps if $N_i$ never halts, and in roughly $n^e$ steps if it does, where $e$ is then a fixed constant.
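The construction of $M_i$ can be sketched as follows. This is a toy model, not the actual machine: `steps_to_halt(N, limit)` is an assumed helper that simulates machine $N$ on empty input for up to `limit` steps and reports when (if ever) it halted, and `M` reports the number of steps it runs for rather than actually running them.

```python
# Sketch of the "gas tank" machine M_i. The helper steps_to_halt(N, limit)
# is an assumption: it simulates machine N on empty input for up to
# `limit` steps and returns the step at which N halted, or None.

def make_M(N, steps_to_halt):
    """Return M_i's running time as a function of the input length n."""
    def M(n):
        e = steps_to_halt(N, n)      # simulate N_i for up to n steps
        if e is None:
            return n                 # N_i still running: halt after ~n steps
        return n ** e                # out of gas: run for n^e steps
    return M

# Toy stand-in: "machine" N halts after exactly N steps (never, if N == 0).
toy_steps_to_halt = lambda N, limit: N if 0 < N <= limit else None
M3 = make_M(3, toy_steps_to_halt)    # the machine built from an N that halts in 3 steps
```

For example, `M3` runs for $n^3$ steps on inputs of length $n \ge 3$, so any correct polynomial bound for it must have degree at least $3$.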

Now if we had a program $P$ that could compute a polynomial upper bound for any polytime machine, then we could decide whether $N_i$ halts: call $P$ on $M_i$, read off the degree $d$ of the returned polynomial, and simulate $N_i$ for $d$ steps. If $N_i$ halted after $e$ steps, then $M_i$ would run for $n^e$ steps on inputs of length $n \geq e$, so any correct polynomial bound must have degree at least $e$. Hence if $N_i$ doesn't halt within $d$ steps, we know it will never halt.
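The resulting halting decider looks like this; both helpers are assumptions standing in for the real objects: `degree_bound(N)` is the degree of the polynomial that $P$ would return for the machine $M_i$ built from $N$, and `steps_to_halt(N, limit)` simulates $N$ for up to `limit` steps.

```python
# Sketch of the halting decider extracted from P. Both helpers are
# assumptions: degree_bound(N) is the degree of the polynomial that P
# returns for the gas-tank machine built from N, and steps_to_halt(N,
# limit) simulates N for up to `limit` steps, returning the halting
# step count or None.

def halts(N, degree_bound, steps_to_halt):
    d = degree_bound(N)
    # If N halts in e steps, the gas-tank machine runs for n^e steps,
    # forcing d >= e; so N halts at all iff it halts within d steps.
    return steps_to_halt(N, d) is not None
```

Since the halting problem is undecidable, no such `degree_bound` (and hence no such $P$) can exist.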

Of course this proof technique is very general; for example, $M_i$ can be made to simulate any fixed machine $M$ as long as it has gas, but then do something else once the gas runs out. This shows that it is undecidable whether an arbitrary given machine behaves like $M$.


You can also diagonalize directly against a purported bound-producing algorithm. Suppose $R$ is an algorithm such that $R(j)$ returns a polynomial whenever $j$ is an index of a polynomial-time function.

Define a function $B(j,n)$ as follows. On input $n$, run $R(j)$ for $n$ steps. If $R(j)$ does not halt within those $n$ steps, return $0$ immediately. Likewise, if $R(j)$ halts but does not return a polynomial, return $0$ immediately. Otherwise, if the returned polynomial is $p(x)$, waste at least $(n+p(n))^2$ steps and then return $0$.
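A runnable sketch of $B$, with the simulation primitive passed in explicitly: `run_bounded(j, n)` is an assumed helper that simulates $R(j)$ for $n$ steps and returns its output if it halted (a polynomial modeled as its coefficient list), or `None` otherwise.

```python
# Sketch of B(j, n). run_bounded(j, n) is a hypothetical primitive:
# simulate R(j) for n steps, returning R's output if it halted, else
# None. A polynomial p is modeled as its coefficient list [p_0, p_1, ...].

def B(j, n, run_bounded):
    result = run_bounded(j, n)
    if result is None:
        return 0                  # R(j) didn't halt within n steps
    if not isinstance(result, list):
        return 0                  # R(j) halted but returned no polynomial
    p_n = sum(c * n ** i for i, c in enumerate(result))  # evaluate p(n)
    for _ in range((n + p_n) ** 2):   # waste at least (n + p(n))^2 steps
        pass
    return 0
```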

Note that for any $j$, the function $C_j = \lambda n.\, B(j,n)$ is total and runs in polynomial time, and if $R(j)$ returns a polynomial $p$ then $p$ is not a bound on the running time of $C_j$: for all sufficiently large $n$, $C_j(n)$ wastes at least $(n+p(n))^2 > p(n)$ steps.
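That the wasted time defeats the bound is elementary arithmetic, since $(n+p(n))^2 \ge n^2 + 2n\,p(n) > p(n)$ for $n \ge 1$. A quick sanity check, using an arbitrary example polynomial:

```python
# Check that (n + p(n))^2 exceeds p(n) for n >= 1, so the polynomial
# returned by R(j) cannot bound C_j's running time. p(x) = 3x^2 + 1
# is just an arbitrary example.
p = lambda x: 3 * x * x + 1
assert all((n + p(n)) ** 2 > p(n) for n in range(1, 200))
```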

Now the function that takes a number $j$ and returns an index for $C_j$ is a total computable function, so we can use the recursion theorem to produce an index $k$ such that $\phi_k(n) = C_k(n)$. Then $\phi_k$ is a total polynomial-time function, but if $R(k)$ returns a polynomial then that polynomial is not an upper bound on the running time of $\phi_k$.

Note: The previous paragraph requires more than the usual statement of the recursion theorem; it requires some knowledge of the proof to verify that $\phi_k$ is polynomial-time. Here is the construction I need.

Let $s(j,k)$ be the usual polynomial-time function (from the s-m-n theorem) such that $\phi_{s(j,k)}(n) \simeq \phi_j(k,n)$. The key point we need is that the running time of $\phi_{s(j,k)}(n)$ is polynomially bounded whenever the running time of $\phi_j(k,n)$ is, and that the former is at least as large as the latter. This can be checked by examining the construction of $s$ in the chosen model of computation.

Now let $d$ be an index for the computable function $\phi_d(j,n) = B(s(j,j),n)$, obtained by simple composition. Let $k = s(d,d)$. Then $\phi_k(n) = \phi_d(d,n) = B(k,n)$, as desired; this is just the proof of the recursion theorem. Moreover, the implementation of these functions ensures that $\phi_k(n)$ runs in polynomial time but takes at least as long as $B(k,n)$, because each computation of $\phi_k(n)$ consists of some polynomial-time-in-$n$ invocations of the $s$ function followed by the literal execution of the program for $B(k,n)$.
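The whole fixed-point construction can be sketched in a toy model where an "index" is just Python source text defining a function `f`, `run` plays the role of the universal machine, `s` implements s-m-n by hard-coding its second argument into the program text, and `B` is replaced by an arbitrary stand-in computable function (everything here is illustrative, not standard notation or library code):

```python
PRIMS = {}   # primitives every "program" may refer to; filled in below

def run(prog):
    """phi_prog: execute a program (source text defining f), return f."""
    env = dict(PRIMS)
    exec(prog, env)
    return env["f"]

def s(j, k):
    """s-m-n: an index (program text) for n |-> phi_j(k, n),
    obtained by hard-coding j and k into the source."""
    return ("j = " + repr(j) + "\n"
            "k = " + repr(k) + "\n"
            "def f(n):\n"
            "    return run(j)(k, n)\n")

def B(j, n):
    # Stand-in for the B defined earlier; any computable function of
    # (j, n) illustrates the fixed-point construction equally well.
    return (len(j) + n) % 997

PRIMS.update(run=run, s=s, B=B)

# d: a program computing (j, n) |-> B(s(j, j), n), by simple composition.
d = "def f(j, n):\n    return B(s(j, j), n)\n"
k = s(d, d)
# phi_k(n) = phi_d(d, n) = B(s(d, d), n) = B(k, n): k is the fixed point.
```

Here `run(k)(n)` first pays the cost of one call to `s` and then literally executes `B(k, n)`, mirroring the running-time claim above.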