Convex combination of iid Bernoulli random variables
One can get a bound which is within a constant factor of the optimal bound using the following Paley-Zygmund type inequality.

Paley-Zygmund type inequality. Let $X$ be a real random variable with mean zero and finite fourth moment that is not identically zero. Then $$ {\bf P}(X > 0) \geq \frac{({\bf E} X^2)^2}{4 {\bf E} X^4}.$$
Proof
By Hölder's inequality we have
$$ {\bf E} X^2 1_{X>0} \leq ({\bf E} X^4)^{1/2} {\bf P}(X>0)^{1/2} \quad (1)$$
and
$$ {\bf E} X 1_{X>0} \leq ({\bf E} X^4)^{1/4} {\bf P}(X>0)^{3/4}$$
and hence by the mean zero hypothesis
$$ {\bf E} |X| 1_{X<0} \leq ({\bf E} X^4)^{1/4} {\bf P}(X>0)^{3/4}.$$
Hence by Hölder again
$$ {\bf E} X^2 1_{X<0} \leq ({\bf E} |X| 1_{X<0})^{2/3} ({\bf E} |X|^4)^{1/3} \leq ({\bf E} X^4)^{1/2} {\bf P}(X>0)^{1/2} $$
which on summing with (1) gives
$$ {\bf E} X^2 \leq 2 ({\bf E} X^4)^{1/2} {\bf P}(X>0)^{1/2}$$
hence the claim. $\Box$
(It should be possible to improve the constant $4$ a bit by using the fact that the fourth moment has to be shared between the positive and negative components of $X$, but I have not tried to optimise this. The extremal relationship between ${\bf P}(X>0)$, ${\bf E} X^2$, and ${\bf E} X^4$ probably comes from the case $X = \xi - p$ of a centred Bernoulli random variable $\xi$. One can also obtain a comparable bound by applying the usual Paley-Zygmund inequality to $(X - \sqrt{\theta {\bf E} X^2})^2$ for some parameter $0 < \theta < 1$ over which one can optimise.)
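As a quick sanity check of the inequality in the extremal-looking case $X = \xi - p$ just mentioned, here is a short Python snippet (purely illustrative, not part of the argument) that evaluates everything in closed form:

```python
# Sanity check of P(X > 0) >= (E X^2)^2 / (4 E X^4) for X = xi - p,
# with xi ~ Bernoulli(p). All moments are exact; no simulation needed.
for p in [0.01, 0.1, 0.25, 0.5, 0.75, 0.9]:
    m2 = p * (1 - p)                          # E X^2
    m4 = p * (1 - p)**4 + (1 - p) * p**4      # E X^4
    prob = p                                  # P(X > 0) = P(xi = 1)
    bound = m2**2 / (4 * m4)
    assert prob >= bound
    print(f"p={p:.2f}: P(X>0)={prob:.4f}, lower bound={bound:.4f}")
```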
In your situation, writing $X = \sum_i a_i (\xi_i - p)$ for the centred sum of Bernoulli variables $\xi_i$, $X$ has mean zero, variance $p(1-p) \sum_i a_i^2$, and fourth moment $$ 6 \sum_{i<j} a_i^2 a_j^2 (p(1-p))^2 + \sum_i a_i^4 (p (1-p)^4 + (1-p) p^4)$$ $$ \leq \max( 3(p(1-p))^2, p (1-p)^4 + (1-p) p^4) \sum_{i,j} a_i^2 a_j^2$$ $$ = \max( 3p^2 (1-p)^2, p(1-p)(1-3p+3p^2)) (\sum_i a_i^2)^2$$ (here we used $\sum_{i,j} a_i^2 a_j^2 = \sum_i a_i^4 + 2 \sum_{i<j} a_i^2 a_j^2$), and hence $$ {\bf P}(X>0) \geq \frac{1}{4 \max( 3, (1-3p+3p^2)/(p(1-p)) )},$$ which is asymptotic to $p/4$ as $p \to 0$, or $(1-p)/4$ as $p \to 1$. One should be able to improve the constant $4$ with a bit more effort.
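To see the resulting bound in action, here is a small Monte Carlo check (my own illustration; the weights, seed, and sample sizes are arbitrary choices):

```python
# Monte Carlo check of P(X > 0) >= 1/(4 max(3, (1-3p+3p^2)/(p(1-p))))
# for X = sum_i a_i (xi_i - p), i.e. P(sum_i a_i xi_i > p sum_i a_i).
import numpy as np

rng = np.random.default_rng(0)
a = rng.exponential(size=30)          # arbitrary positive weights
trials = 200_000

for p in [0.05, 0.2, 0.5, 0.8]:
    xi = (rng.random((trials, a.size)) < p).astype(float)
    X = (a * (xi - p)).sum(axis=1)
    prob = (X > 0).mean()
    bound = 1 / (4 * max(3, (1 - 3*p + 3*p**2) / (p * (1 - p))))
    assert prob >= bound
    print(f"p={p:.2f}: P(X>0) ~ {prob:.4f}, lower bound = {bound:.4f}")
```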
To complement my other answer, I will show
Proposition 1. Let $\xi_k$ be a finite number of iid Bernoulli random variables of expectation $p > 1/2$, and let $a_k > 0$ be real numbers. Then ${\bf P}( \sum_k a_k \xi_k \geq p \sum_k a_k) \gg 1$.
By replacing $\xi_k$ with $1-\xi_k$ and $p$ with $1-p$, this is equivalent to
Proposition 2. Let $\xi_k$ be a finite number of iid Bernoulli random variables of expectation $p < 1/2$, and let $a_k > 0$ be real numbers. Then ${\bf P}( \sum_k a_k \xi_k \leq p \sum_k a_k) \gg 1$.
Let's prove Proposition 2. I found a number of arguments to treat various special cases, which, when combined, cover the general case as follows.
We will need a large absolute constant $C_0$.
Case 1: (high multiplicity) For every integer $n$, the number of $a_k$ in the dyadic interval $(2^{n-1}, 2^n]$ is either zero, or at least $C_0/p$.
Roughly speaking this is the regime where the central limit theorem applies. In this case we can proceed by the Berry-Esseen inequality, which lets one estimate $$ {\bf P}( \sum_k a_k \xi_k \geq p \sum_k a_k) = \frac{1}{2} + O( \frac{p \sum_k a_k^3}{(p \sum_k a_k^2)^{3/2}} ).$$ If, for each $n$, we let $c_n$ be the number of $k$ for which $a_k \in (2^{n-1},2^n]$, we can write the right-hand side as $$ \frac{1}{2} + O( p^{-1/2} \frac{\sum_n 2^{3n} c_n}{(\sum_n 2^{2n} c_n)^{3/2}} ).$$ By hypothesis, each $c_n$ is either zero or at least $C_0/p$, so we can bound $$ 2^{3n} c_n \leq (C_0/p)^{-1/2} 2^{2n} c_n (\sum_{n'} 2^{2n'} c_{n'})^{1/2} $$ so the previous expression becomes $\frac{1}{2} + O( C_0^{-1/2} )$, and the claim follows for $C_0$ large enough.
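As a numerical illustration of this regime (a toy example of my own, not part of the proof), take a weight vector whose occupied dyadic blocks each contain far more than $C_0/p$ weights; the probability should then be close to $1/2$:

```python
# High-multiplicity toy example: each occupied dyadic block holds at least
# C0/p = 200 weights, so Berry-Esseen predicts a probability near 1/2.
import numpy as np

rng = np.random.default_rng(1)
p, C0 = 0.1, 20
a = np.concatenate([
    rng.uniform(0.5, 1.0, size=300),   # block (2^{-1}, 2^0]
    rng.uniform(2.0, 4.0, size=250),   # block (2^1, 2^2]
])
trials = 20_000
xi = rng.random((trials, a.size)) < p
prob = ((xi * a).sum(axis=1) <= p * a.sum()).mean()
print(f"P(sum a_k xi_k <= p sum a_k) ~ {prob:.3f}  (expected ~ 1/2)")
```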
Case 2: (low multiplicity) For every integer $n$, the number of $a_k$ in the dyadic interval $(2^{n-1}, 2^n]$ is at most $C_0/p$.
Here the Bernoulli variables are mostly zero and one can proceed using the first and second moment methods, after first applying a dyadic decomposition.
Let $n_0$ be the integer such that $p \sum_k a_k \in (2^{n_0-1}, 2^{n_0}]$. Since $2^{n_0-1} < p \sum_k a_k$, it will suffice to show that $$ {\bf P}( \sum_k a_k \xi_k \leq 2^{n_0-1} ) \gg_{C_0} 1.$$
The number of $k$ with $a_k > 2^{n_0-C_0}$ is at most $2^{C_0} / p$ (since $\sum_k a_k \leq 2^{n_0}/p$), and hence $$ {\bf P}( \sum_{k: a_k > 2^{n_0-C_0}} a_k \xi_k = 0 ) \geq (1-p)^{2^{C_0}/p} \gg_{C_0} 1.\qquad(1)$$ Next, for any $m \geq C_0$, we consider the random variable $$ \sum_{k: 2^{n_0-m-1} < a_k \leq 2^{n_0-m}} a_k \xi_k.$$ There are at most $C_0/p$ elements in this sum, so the second moment of this variable can be bounded by $$ {\bf E} ( \sum_{k: 2^{n_0-m-1} < a_k \leq 2^{n_0-m}} a_k \xi_k )^2 \ll C_0^2 2^{2n_0-2m}.$$ By Markov's inequality we thus have $$ {\bf P} ( \sum_{k: 2^{n_0-m-1} < a_k \leq 2^{n_0-m}} a_k \xi_k \geq 2^{n_0-m/2} ) \ll C_0^2 2^{-m}.\qquad(2)$$
Applying the union bound to (2) for all $m \geq C_0$, we see that outside an event of probability $O(C_0^2 2^{-C_0})$ we have $$\sum_{k: a_k \leq 2^{n_0-C_0}} a_k \xi_k \leq \sum_{m \geq C_0} 2^{n_0-m/2} \leq 4 \cdot 2^{n_0-C_0/2} \leq 2^{n_0-1},$$ and combining this with (1) (using independence) we obtain the claim (if $C_0$ is large enough).
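For contrast, here is the same experiment in a low-multiplicity configuration (again a toy example of my own): geometric weights, one per dyadic block, with small $p$. The probability stays bounded away from zero, driven by the event that the few largest $\xi_k$ vanish:

```python
# Low-multiplicity toy example: a_k = 2^{-k}, one weight per dyadic block.
import numpy as np

rng = np.random.default_rng(2)
p = 0.05
a = 0.5 ** np.arange(30)
trials = 200_000
xi = rng.random((trials, a.size)) < p
prob = ((xi * a).sum(axis=1) <= p * a.sum()).mean()
print(f"P(sum a_k xi_k <= p sum a_k) ~ {prob:.3f}")
# For comparison: the probability that the four weights exceeding
# p * sum(a) all vanish is (1-p)^4 ~ 0.81.
print(f"(1-p)^4 = {(1 - p)**4:.3f}")
```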
Case 3: (general case)
In this case we can partition the indices $k$ into two classes ${\mathcal K}_1$, ${\mathcal K}_2$, according to whether the dyadic interval $(2^{n-1},2^n]$ containing $a_k$ holds at least $C_0/p$ of the $a_k$ or not; then ${\mathcal K}_1$ falls under the high multiplicity case and ${\mathcal K}_2$ under the low multiplicity case. By the preceding cases we have
$${\bf P}( \sum_{k \in {\mathcal K}_1} a_k \xi_k \leq p \sum_{k \in {\mathcal K}_1} a_k) \gg 1$$ and $${\bf P}( \sum_{k \in {\mathcal K}_2} a_k \xi_k \leq p \sum_{k \in {\mathcal K}_2} a_k) \gg_{C_0} 1$$ so by independence we conclude $${\bf P}( \sum_k a_k \xi_k \leq p \sum_k a_k) \gg_{C_0} 1$$ as required.
I simplify Tao's proof of Proposition 2.
Arrange the $a_i$s in decreasing order, $a_1\geq a_2 \geq \cdots$. Let $C$ be a (large) absolute constant that will be determined later. Let $n_0=\lceil C/p\rceil$. Decompose each $a_i$ as $a_i=b_i+c_i$, where $b_i=\max\{0,a_i-a_{n_0}\}$ and $c_i=\min\{a_{n_0},a_i\}$. Note that the $b_i$s and $c_i$s are non-negative and decreasing.
We must bound away from zero the probability of the event $\{\sum a_i \xi_i \leq p\sum a_i\}\supset \{\sum b_i \xi_i \leq p\sum b_i\}\cap \{\sum c_i \xi_i \leq p\sum c_i\}$. By the FKG inequality (https://en.wikipedia.org/wiki/FKG_inequality), the last two events are positively correlated, since both are decreasing events in the $\xi_i$; therefore it suffices to bound each of them away from zero separately.
For the first event, note that $b_i=0$ for $i\geq n_0$, so $\Pr(\sum b_i \xi_i \leq p\sum b_i)\geq \Pr(\xi_1=\cdots=\xi_{n_0-1}=0)=(1-p)^{n_0-1}\geq (1-p)^{C/p}\geq e^{-2C}$ (using $p< 1/2$).
For the second event, we suppose $a_{n_0}>0$ (otherwise the event holds trivially) and normalize so that $a_{n_0}=1$. We apply the Berry-Esseen inequality to obtain $$ \Pr(\sum c_i \xi_i > p\sum c_i)= \frac 1 2 + O\left(\frac{p\sum c_i^3}{(p\sum c_i^2)^{3/2}}\right). $$
Since $c_1=\cdots=c_{n_0}=1$ and $c_i\leq 1$ (for all $i$), we can estimate the right-hand side by $$ \frac{p\sum c_i^3}{(p\sum c_i^2)^{3/2}}\leq p^{-1/2}\frac{\sum c_i^2}{(\sum c_i^2)^{3/2}}\leq (pn_0)^{-1/2}\leq C^{-1/2}. $$ Choosing $C$ large enough concludes the proof.
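One can also watch this decomposition and the FKG step numerically; the sketch below (my own illustration, with arbitrary parameters, and $n_0$ standing in for $\lceil C/p\rceil$) estimates the two events' probabilities and confirms their positive correlation:

```python
# Simulate the decomposition a_i = b_i + c_i and check that the two
# decreasing events are positively correlated, as FKG guarantees.
import numpy as np

rng = np.random.default_rng(3)
p = 0.3
a = np.sort(rng.exponential(size=50))[::-1]   # a_1 >= a_2 >= ... > 0
n0 = 10                                       # stand-in for ceil(C/p)
b = np.maximum(0.0, a - a[n0 - 1])            # a_{n0} is a[n0-1] (0-indexed)
c = np.minimum(a[n0 - 1], a)

trials = 100_000
xi = rng.random((trials, a.size)) < p
ev_b = (xi * b).sum(axis=1) <= p * b.sum()
ev_c = (xi * c).sum(axis=1) <= p * c.sum()
print(f"P(b-event)           ~ {ev_b.mean():.3f}")
print(f"P(c-event)           ~ {ev_c.mean():.3f}")
print(f"P(both)              ~ {(ev_b & ev_c).mean():.3f}")
print(f"product of marginals ~ {ev_b.mean() * ev_c.mean():.3f}")
```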