A probabilistic angle inequality
Normalize $q$ such that $q^Tq=1$ and $q_i\geq 0$, for all $i=1,\ldots,n$. Let $X_i=q_ip_i$, $i=1,\ldots,n$. We must find an absolute constant $c>0$ such that $$P\left(c\left(\sum_i X_i\right)^2\geq \frac{\sum_ip_i^2}{n}\right)\geq \frac 1 2.$$
By the weak law of large numbers, $\frac{\sum_ip_i^2}{n} \to E[p_1^2]=\frac13<1$ in probability. Therefore, it is sufficient to prove that $$P\left(\left\vert\sum_i X_i\right\vert< c^{-\frac 1 2} \right)< \frac 1 3,$$ for a large enough constant $c>0$.
Assume wlog that $q_1=\max_i q_i$. Since $X_1$ is uniformly distributed in an interval of length $2q_1$ (independently of $\sum_{i>1}X_i$), $$P\left(\left\vert\sum_i X_i\right\vert< \frac {q_1} 4 \middle \vert \sum_{i>1}X_i\right)\leq \frac 1 4. $$ This concludes the proof in the case that $q_1$ is bounded away from 0. If $q_1$ is arbitrarily close to 0, then the distribution of $\sum_i X_i$ is arbitrarily close to the normal distribution $N(0,1/3)$, by the Berry--Esseen inequality; therefore choosing $c=100$, say, will do.
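As a quick Monte Carlo sanity check of this claim (not part of the proof), one can estimate the probability directly for a few test vectors $q$; the value $c=100$, the dimension, the test vectors, and the sample size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(q, c=100.0, trials=200_000):
    """Monte Carlo estimate of P( c*(q.p)^2 >= (sum_i p_i^2)/n ) for p uniform on [-1,1]^n."""
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)                     # normalize so that q^T q = 1
    n = q.size
    p = rng.uniform(-1.0, 1.0, size=(trials, n))  # iid uniform [-1,1] entries
    lhs = c * (p @ q) ** 2
    rhs = (p ** 2).sum(axis=1) / n
    return float((lhs >= rhs).mean())

# a few illustrative (hypothetical) choices of q
for q in [np.ones(50), np.r_[1.0, np.zeros(49)], rng.uniform(0.0, 1.0, 50)]:
    print(hit_rate(q))
```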
$\newcommand{\de}{\delta} \newcommand{\De}{\Delta} \newcommand{\ep}{\varepsilon} \newcommand{\ga}{\gamma} \newcommand{\Ga}{\Gamma} \newcommand{\la}{\lambda} \newcommand{\Si}{\Sigma} \newcommand{\thh}{\theta} \newcommand{\R}{\mathbb{R}} \newcommand{\X}{\mathcal{X}} \newcommand{\E}{\operatorname{\mathsf E}} \newcommand{\PP}{\operatorname{\mathsf P}} \newcommand{\ii}[1]{\operatorname{\mathsf I}\{#1\}}$
Let us show more: For each real $\ep\in(0,1)$ there is some real $c>0$ such that for any nonzero $q=(q_1,\dots,q_n)\in\R^n$ we have \begin{equation*} \PP((p^Tp)(q^Tq)\le cn(p^Tq)^2)\ge1-\ep, \tag{0} \end{equation*} where $p:=(U_1,\dots,U_n)$ and $U_1,\dots,U_n$ are iid random variables (r.v.'s) uniformly distributed in $[-1,1]$. Here, instead of $1/2$ in the OP's question, we have $1-\ep$.
By rescaling, without loss of generality (wlog) $q^Tq=1$. Also, $p^Tp\le n$, so that the inequality $(p^Tp)(q^Tq)\le cn(p^Tq)^2$ is implied by $(p^Tq)^2\ge1/c$.
So, letting $$\de=2/\sqrt c,$$ we see that it is enough to show that $\forall\ep>0$ $\exists\de>0$ $\forall q\in\Si_{n-1}$ $\PP(|S_q|\le\de/2)\le\ep$, where $\Si_{n-1}$ is the unit sphere in $\R^n$ and \begin{equation*} S_q:=p^Tq=\sum_1^n q_i U_i. \end{equation*}
In turn, it is enough to show that \begin{equation*} Q_{S_q}(\de)\underset{\de\downarrow0}\longrightarrow0 \tag{1} \end{equation*} uniformly in $q\in\Si_{n-1}$, where \begin{equation*} Q_X(\de):=\sup_{x\in\R}\PP(|X-x|\le\de/2), \end{equation*} so that $Q_X$ is the so-called concentration function of a r.v. $X$.
Let us now prove (1). Because the r.v.'s $U_1,\dots,U_n$ are iid and symmetrically distributed, wlog \begin{equation*} q_1\ge\cdots\ge q_n\ge0. \end{equation*} Take any $q_*\in(0,1)$. One of the following cases must occur.
Case 1: $q_1\ge q_*$. Then \begin{equation*} Q_{S_q}(\de)\le Q_{q_1 U_1}(\de)\le\frac12\frac\de{q_1}\le\frac\de{2q_*}; \tag{1.5} \end{equation*} the first inequality in the above display is due to the fact that for any independent r.v.'s $X$ and $Y$ \begin{align*} Q_{X+Y}(\de)&=\sup_{x\in\R}\PP(|X+Y-x|\le\de/2) \\ &=\sup_{x\in\R}\int_\R \PP(Y\in dy)\PP(|X+y-x|\le\de/2) \\ &\le\int_\R \PP(Y\in dy)\sup_{x\in\R}\PP(|X+y-x|\le\de/2) =Q_X(\de). \end{align*}
Case 2: $q_1\le q_*$. Then, by the Berry--Esseen (BE) inequality, \begin{align*} Q_{S_q}(\de)&\le Q_Z(\de)+2A\frac{\sum_1^n \E|q_iU_i|^3}{(\sum_1^n \E(q_iU_i)^2)^{3/2}} \\ &\le\frac\de{\sqrt{2\pi}}+2A\sum_1^n q_i^3\,\frac{3\sqrt3}4 \le \frac\de{\sqrt{2\pi}}+2Aq_*\,\frac{3\sqrt3}4, \end{align*} where $Z\sim N(0,1)$ and $A$ is the universal positive real constant factor in the BE bound for non-iid independent r.v.'s; the best currently known value for $A$ is $0.56$ (I. G. Shevtsova. Refinement of estimates for the rate of convergence in Lyapunov's theorem. Dokl. Akad. Nauk, 435(1):26--28, 2010); a slightly worse value, $0.5606$, was a bit earlier established by Tyurin.
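For the record, the moment constants used above can be checked symbolically; the following sympy sketch verifies $\E U_1^2=1/3$, $\E|U_1|^3=1/4$, and the factor $3\sqrt3/4$ in the Lyapunov ratio.

```python
from sympy import Rational, integrate, simplify, symbols

u = symbols('u', nonnegative=True)
# By symmetry of U_1 ~ Uniform[-1,1]:  E U_1^2 = int_0^1 u^2 du,  E|U_1|^3 = int_0^1 u^3 du.
EU2 = integrate(u**2, (u, 0, 1))                 # 1/3
EU3 = integrate(u**3, (u, 0, 1))                 # 1/4
ratio = simplify(EU3 / EU2**Rational(3, 2))      # E|U_1|^3 / (E U_1^2)^{3/2}
print(EU2, EU3, ratio)                           # 1/3  1/4  3*sqrt(3)/4
```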
Thus, for all $q\in\Si_{n-1}$ \begin{equation*} Q_{S_q}(\de)\le\max\Big(\frac\de{2q_*},\frac\de{\sqrt{2\pi}}+2Aq_*\,\frac{3\sqrt3}4\Big). \tag{2} \end{equation*} So (1) follows by choosing $q_*=\sqrt\de$, say.
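Here is a crude numerical illustration (not a proof) of (2) with the choice $q_*=\sqrt\de$: the concentration function of $S_q$ is estimated by Monte Carlo over a grid of centers and compared with the right-hand side of (2). The particular $q$, the dimension, the sample size, and the grid are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
A = 0.56  # Berry--Esseen constant cited above

def Q_hat(samples, de, grid=400):
    """Crude Monte Carlo estimate of the concentration function Q_X(de)."""
    xs = np.linspace(samples.min(), samples.max(), grid)
    return max(float(np.mean(np.abs(samples - x) <= de / 2)) for x in xs)

def rhs2(de, qstar):
    """Right-hand side of (2)."""
    return max(de / (2 * qstar), de / np.sqrt(2 * np.pi) + 2 * A * qstar * 3 * np.sqrt(3) / 4)

n = 40
q = rng.uniform(0.0, 1.0, n)
q /= np.linalg.norm(q)                               # a point of the unit sphere
S = rng.uniform(-1.0, 1.0, size=(100_000, n)) @ q    # samples of S_q
for de in [0.5, 0.1, 0.02]:
    print(de, Q_hat(S, de), rhs2(de, np.sqrt(de)))
```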
The OP later requested an explicit expression for $c$ providing for (0). Let us get this as well. The $\max$ in (2) is minimized in $q_*\in(0,1)$ when the two arguments of the $\max$ are equal to each other, that is, when \begin{equation*} q_*=q_*(\de):=\frac{\sqrt{6 \sqrt{3} \pi A \de +\de ^2}-\de}{3 \sqrt{6 \pi } A}. \end{equation*} Equating now $\frac\de{2q_*(\de)}$ (which equals the $\max$ minimized in $q_*$) with $\ep$ and solving for $\de$, we get \begin{equation*} \de=\de(\ep):=\frac{4 \sqrt{2 \pi } \ep ^2}{3 \sqrt{6 \pi } A+4 \ep }, \end{equation*} whence \begin{equation*} c=c(\ep):=\frac4{\de(\ep)^2}=\frac{\left(3 \sqrt{6 \pi } A+4 \ep\right)^2}{8 \pi \ep^4} \tag{3} \end{equation*} is enough for condition (0). In particular, using the mentioned best known value $0.56$ for $A$, we get $c(1/2)=54.98\dots$.
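For convenience, here is a short numerical evaluation of $\de(\ep)$ and $c(\ep)$ from the formulas above, with the cited value $A=0.56$; it reproduces $c(1/2)=54.98\dots$.

```python
import numpy as np

A = 0.56  # best currently known Berry--Esseen constant, as cited above

def de(eps):
    """delta(eps) from the display above."""
    return 4 * np.sqrt(2 * np.pi) * eps**2 / (3 * np.sqrt(6 * np.pi) * A + 4 * eps)

def c_upper(eps):
    """c(eps) from (3)."""
    return (3 * np.sqrt(6 * np.pi) * A + 4 * eps) ** 2 / (8 * np.pi * eps**4)

print(c_upper(0.5), 4 / de(0.5) ** 2)   # both ~54.98, checking c = 4/delta^2
```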
The OP has now also requested a lower bound on $c$ providing for (0). Let us get this as well. Suppose that for some real $\ep\in(0,1)$, some real $c>0$, all natural $n$, and all nonzero $q=(q_1,\dots,q_n)\in\R^n$ we have (0), where, as before, $p=(U_1,\dots,U_n)$ and $U_1,\dots,U_n$ are iid r.v.'s uniformly distributed in $[-1,1]$. Then, choosing $q=(1,\dots,1)/\sqrt n$, we have \begin{equation} \PP(|T_n|\ge1/\sqrt c)\ge1-\ep, \end{equation} where \begin{equation} T_n:=\frac{\sum_1^n U_i}{\sqrt{\sum_1^n U_i^2}}. \end{equation} By the central limit theorem and the law of large numbers, $\sum_1^n U_i/\sqrt n\to Z/\sqrt3$ in distribution, where $Z\sim N(0,1)$, and $\sum_1^n U_i^2/n\to\E U_1^2=1/3$ in probability (say), so that $T_n\to Z\sim N(0,1)$ in distribution; the convergence here is for $n\to\infty$. So, \begin{equation} 2(1-\Phi(1/\sqrt{c}))\ge1-\ep, \end{equation} where $\Phi$ is the standard normal cumulative distribution function; that is, \begin{equation} c\ge c_\ep:=1/\Phi^{-1}(\tfrac12+\tfrac\ep2)^2\sim\tfrac2{\pi\ep^2} \end{equation} as $\ep\downarrow0$, because $\Phi(x)-1/2\sim x/\sqrt{2\pi}$ as $x\to0$ and hence $\Phi^{-1}(\tfrac12+u)\sim u\sqrt{2\pi}$ as $u\to0$.
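Numerically (a sketch using scipy's norm.ppf for $\Phi^{-1}$), $c_\ep$ and its small-$\ep$ asymptotics can be evaluated as follows; in particular this gives $c_{1/2}=2.198\dots$.

```python
import math
from scipy.stats import norm

def c_lower(eps):
    """c_eps = 1 / Phi^{-1}(1/2 + eps/2)^2."""
    return 1.0 / norm.ppf(0.5 + eps / 2.0) ** 2

print(c_lower(0.5))                                # ~2.198
print(c_lower(1e-3), 2 / (math.pi * 1e-3 ** 2))    # small-eps asymptotics 2/(pi*eps^2)
```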
This lower bound, $c_\ep$, on $c$ in (0) is quite a bit lower than the upper bound $c(\ep)$ in (3), especially for small $\ep$. For $\ep=1/2$, we have the lower bound $c_{1/2}=2.198\dots$ on $c$ vs. the mentioned upper bound $c(1/2)=54.98\dots$. The gap is pretty large.
One can follow the lines of the above proof to see the causes of this gap. One such cause is the first inequality in (1.5). The other one is due to the use of the Berry--Esseen (BE) bound -- which is true for all distributions (with the worst case being discrete distributions, whereas the distributions of $q_i U_i$ are absolutely continuous with bounded densities) and uniformly for all deviations from the mean. One may try to improve the bounds on $c$, but I believe that would be quite outside the usual scope for MathOverflow answers. Here one may also note that in the possibly simpler case of the so-called nonuniform BE bounds, the gap between the constant factors in the best known upper bound ($25.80$) and in the best known lower bound ($1.0135$) is similarly very large; see e.g. the paper On the Nonuniform Berry--Esseen Bound or its arXiv version.
Note that $\E(p^Tq)^2=q^Tq/3$ and, by the Rosenthal inequality, $\E(p^Tq)^4\ll(q^Tq)^2$. So, by the Paley--Zygmund inequality, \begin{equation} \PP((p^Tq)^2\ge q^Tq/6)\gg\frac{(\E(p^Tq)^2)^2}{\E(p^Tq)^4}\ge a, \end{equation} where $a>0$ is a universal constant. Since $p^Tp\le n$, we have $$(p^Tp)(q^Tq)\le 6n(p^Tq)^2$$ with probability $\ge a$. ($a$ may be less than $1/2$, though.)
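To get a feel for the size of the universal constant $a$, here is a small Monte Carlo sketch of $\PP((p^Tq)^2\ge q^Tq/6)$ for a few test vectors $q$; the vectors, the dimension, and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def pz_rate(q, trials=200_000):
    """Monte Carlo estimate of P( (p^T q)^2 >= q^T q / 6 ) for p uniform on [-1,1]^n."""
    q = np.asarray(q, dtype=float)
    p = rng.uniform(-1.0, 1.0, size=(trials, q.size))
    return float((((p @ q) ** 2) >= (q @ q) / 6).mean())

# a few illustrative (hypothetical) choices of q
for q in [np.ones(30), np.r_[1.0, np.zeros(29)], rng.uniform(0.0, 1.0, 30)]:
    print(pz_rate(q))
```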