Adaptive version of the Azuma–Hoeffding inequality
This inequality cannot be true. Let us rewrite it in the more common form $$P(R_n\ge x)\le e^{-x^2/2} \tag{1} $$ for $x\ge0$, where $R_n:=S_n/b_n$, $S_n:=\sum_1^n c_iB_i$, $b_n:=\sqrt{\sum_1^n c_i^2}$.
Let $n=2$, $c_1=1$, and $c_2=aI\{B_1=-1\}$, where $I\{\cdot\}$ denotes the indicator function, $a>0$ is large enough that $\frac{-1+a}{\sqrt{1+a^2}}>x$, and $x<1$ is close enough to $1$ that $e^{-x^2/2}<0.7$ (note that $e^{-1/2}=0.606\ldots<0.7$). Then
$$P(R_2\ge x)=P(R_2\ge x,B_1=1)+P(R_2\ge x,B_1=-1)$$
$$=P(R_1\ge x,B_1=1)+P\Big(\frac{-1+aB_2}{\sqrt{1+a^2}}\ge x,B_1=-1\Big)$$
$$=P(B_1\ge x,B_1=1)+P(B_2=1,B_1=-1)$$
$$=P(B_1=1)+P(B_2=1,B_1=-1)$$
$$=\tfrac12+\tfrac14=0.75>0.7>e^{-x^2/2},$$
so that $(1)$ fails to hold. (For the second equality: on $\{B_1=1\}$ we have $c_2=0$, so $R_2=B_1=R_1$, while on $\{B_1=-1\}$ we have $c_2=a$, so $R_2=\frac{-1+aB_2}{\sqrt{1+a^2}}$. For the third: $R_1=B_1$, and, by the choice of $a$, $\frac{-1+aB_2}{\sqrt{1+a^2}}\ge x$ exactly when $B_2=1$.)
Letting now $c_3=\dots=c_n=0$, one disproves the inequality in question for any natural $n\ge2$.
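To make the counterexample concrete, here is a short Python check that enumerates the four equally likely values of $(B_1,B_2)$. The particular values $a=100$ and $x=0.95$ are my own choices satisfying the constraints above, not values from the argument itself.

```python
import itertools
import math

# My own concrete choices satisfying the constraints in the text:
a = 100.0   # large enough that (-1 + a) / sqrt(1 + a^2) > x
x = 0.95    # x < 1 and exp(-x^2 / 2) < 0.7

assert (-1 + a) / math.sqrt(1 + a**2) > x
assert math.exp(-x**2 / 2) < 0.7

# Enumerate the four equally likely outcomes of (B_1, B_2).
prob = 0.0
for B1, B2 in itertools.product((-1, 1), repeat=2):
    c1 = 1.0
    c2 = a if B1 == -1 else 0.0          # c_2 depends only on B_1
    S2 = c1 * B1 + c2 * B2
    b2 = math.sqrt(c1**2 + c2**2)
    if S2 / b2 >= x:
        prob += 0.25                      # each outcome has probability 1/4

print(prob, math.exp(-x**2 / 2))          # 0.75 versus about 0.637: (1) fails
```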
(What is sometimes referred to as the Azuma (or Hoeffding–Azuma) inequality is due entirely to [Hoeffding 1963]; see the last paragraph of Section 2 there.)
Addendum: I doubt very much that any modification of the inequality in question can hold unless one prevents the sum of the conditional variances of the increments of the martingale from being too small; cf., e.g., inequalities (1.11) in [de la Peña]. That observation is actually what gave me the idea for the counterexample.
There is no such inequality even if we further restrict the $c_k$ to lie in $\{0,1\}$ and weaken the inequality by allowing a constant factor on the right-hand side. (I think it is natural to add the condition that the $c_k$ are uniformly bounded.) So suppose $c_k \in \{0,1\}$, and assume without loss of generality that no $1$ follows a $0$. Write $X_t := B_1+\dots+B_t$ for the partial sums. The choice of the $c_k$ is then equivalent to a stopping rule $\tau \le N$: we bet a constant amount on each toss of a fair coin and stop after $\tau$ flips, so that $c_k = 0$ for $k > \tau$; the requirement that $c_k$ depend only on $B_1,\dots,B_{k-1}$ is exactly the requirement that $\tau$ be a stopping time.
Choose a constant $s > 0$. Can we bound
$$P\left(\sum_{k=1}^N c_k B_k \ge s \sqrt{\sum_{k=1}^N c_k^2}\right) = P\left(X_\tau \ge s \sqrt{\tau}\right)?$$
(Here $\sum_{k=1}^N c_k B_k = X_\tau$ and $\sum_{k=1}^N c_k^2 = \tau$, since $c_k=1$ for $k\le\tau$ and $c_k=0$ afterwards.)
The law of the iterated logarithm implies that for an infinite sequence $\{B_i\}$ of fair $\pm1$ flips, $\limsup_t X_t/\sqrt{t}=\infty$ almost surely (the walk gets arbitrarily many standard deviations above its mean); hence, for any fixed $s$, the probability that $X_t > s\sqrt{t}$ for some $t\le N$ tends to $1$ as $N\to\infty$. So let $\tau$ be the minimum of $N$ and the first time $t$ with $X_t > s\sqrt{t}$: then $P(X_\tau \ge s\sqrt{\tau})\to 1$ as $N\to\infty$, so no bound independent of $N$ (even up to a constant factor) can hold.
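As an illustration (not part of the original argument), here is a small Monte Carlo sketch of this stopping rule in Python; the value $s=1.5$, the list of horizons $N$, and the number of trials are arbitrary choices of mine.

```python
import math
import random

def hit_probability(N, s, trials=2000, seed=0):
    """Monte Carlo estimate of P(X_tau >= s*sqrt(tau)), where X_t = B_1 + ... + B_t
    is a fair +-1 random walk and tau = min(N, first t with X_t > s*sqrt(t))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        X = 0
        stopped_early = False
        for t in range(1, N + 1):
            X += rng.choice((-1, 1))        # one fair +-1 flip
            if X > s * math.sqrt(t):        # the stopping rule from the text
                stopped_early = True
                break
        if stopped_early or X >= s * math.sqrt(N):   # if we never stopped, tau = N
            hits += 1
    return hits / trials

s = 1.5
for N in (10, 100, 1000, 5000):
    print(N, hit_probability(N, s))
# The estimates increase with N; by the law of the iterated logarithm they tend
# to 1 as N -> infinity, although the convergence is slow.
```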
If you let the bound depend on $N$, say
$$P\left(\max_{1\le t\le N} \frac{X_t}{\sqrt{t}} \ge s\right) \le f(s,N),$$
then nontrivial bounds are possible.
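One simple illustration (my own, not from the answer above): the per-time Hoeffding bound $P(X_t\ge s\sqrt{t})\le e^{-s^2/2}$ combined with a union bound over $t\le N$ gives the crude choice $f(s,N)=N\,e^{-s^2/2}$, and a standard "peeling" argument over geometrically spaced blocks can reduce the dependence on $N$ to a logarithmic factor, at the cost of a worse constant in the exponent. A quick Python sketch comparing the crude union bound with a simulation (all parameter values are mine):

```python
import math
import random

def max_ratio_probability(N, s, trials=20000, seed=1):
    """Monte Carlo estimate of P(max_{1 <= t <= N} X_t / sqrt(t) >= s)
    for the simple random walk X_t = B_1 + ... + B_t."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        X = 0
        for t in range(1, N + 1):
            X += rng.choice((-1, 1))
            if X >= s * math.sqrt(t):     # the running maximum has reached level s
                hits += 1
                break
    return hits / trials

N, s = 100, 3.5
empirical = max_ratio_probability(N, s)
union_bound = min(1.0, N * math.exp(-s**2 / 2))   # f(s, N) = N * exp(-s^2 / 2)
print(empirical, union_bound)   # the union bound holds here but is quite loose
```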