Talagrand's inequality for the discrete cube
Talagrand's inequality is more general. The inequality you ask about is McDiarmid's bounded-difference inequality (see, e.g., https://en.wikipedia.org/wiki/Doob_martingale and McDiarmid, Colin (1989), "On the Method of Bounded Differences", Surveys in Combinatorics, 141: 148–188), which is a direct consequence of the Hoeffding–Azuma inequality; no convexity assumption is needed there. McDiarmid's bound (which gives weaker concentration) is essentially sharp for functions that are Lipschitz in the $\ell^1$ metric, which is often the most relevant case in combinatorial applications; there convexity is neither needed nor helpful: consider $f(x_1,x_2,\ldots,x_n)=\sum_i x_i$. Being Lipschitz in $\ell^2$ is a much stronger requirement, which leads to better concentration for convex functions, and I now understand that this was the assumption of interest to the OP.
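For reference, the bound in question reads as follows: if $X_{1},\ldots,X_{n}$ are independent random variables and $f$ changes by at most $c_{i}$ when only the $i$-th coordinate is changed, then McDiarmid's inequality states that $$ P\bigl(|f(X_{1},\ldots,X_{n}) - \mathbb{E}f| \geq t\bigr) \leq 2\exp\Bigl(-\frac{2t^{2}}{c_{1}^{2}+\ldots+c_{n}^{2}}\Bigr). $$ For a function that is $1$-Lipschitz in $\ell^{1}$ on the cube one can take $c_{i}=1$ (and in general nothing smaller), giving the bound $2e^{-2t^{2}/n}$; by contrast, for functions that are convex and $1$-Lipschitz in $\ell^{2}$, Talagrand's inequality yields a dimension-free bound of the form $4e^{-t^{2}/4}$ around the median.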
Nevertheless, here is a counterexample: a $1$-Lipschitz (but non-convex) function on $\mathbb{R}^{N}$ which fails to satisfy the concentration inequality on the Hamming cube $\{0,1\}^{N}$ as $N$ goes to infinity.
Take $N$ to be even. Let $$ A = \left\{ (x_{1}, \ldots, x_{N}) \in \{0,1\}^{N}\,:\, x_{1}+\ldots+x_{N}\leq \frac{N}{2}\right\}. $$ Next, define the function
$$ f(x) = \inf_{y \in A} \| x-y\|_{\mathbb{R}^{N}}. $$ Since $f$ is the distance function to a nonempty set, it is $1$-Lipschitz with respect to the Euclidean norm (an exercise). Note that $f$ is not convex: it vanishes on $A$, yet it is positive at the midpoint of any two distinct points of $A$, since such a midpoint has a coordinate equal to $\frac{1}{2}$ and hence lies at distance at least $\frac{1}{2}$ from every point of the cube.
On the other hand, on $\{0,1\}^{N}$ we have $f(x) = \sqrt{\max\{x_{1}+\ldots+x_{N} - \frac{N}{2},0\}}$: if $x_{1}+\ldots+x_{N} = k > \frac{N}{2}$, a nearest point of $A$ is obtained by switching $k - \frac{N}{2}$ coordinates from $1$ to $0$, each contributing $1$ to the squared distance. Writing $x_{1}+\ldots+x_{N}-\frac{N}{2} = \frac{1}{2}\bigl((2x_{1}-1)+\ldots+(2x_{N}-1)\bigr)$ and dividing by $N^{1/4}/\sqrt{2}$, we get as $N$ goes to infinity $$ \begin{aligned} P(|f & -\mathbb{E}f| > N^{1/4}) \\ & = P\biggl(\biggl|\sqrt{\max\Bigl\{\frac{(2x_{1}-1)+\ldots+(2x_{N}-1)}{\sqrt{N}},0\Bigr\}} \\ & \qquad\qquad - \mathbb{E}\sqrt{\max\Bigl\{\frac{(2x_{1}-1)+\ldots+(2x_{N}-1)}{\sqrt{N}},0\Bigr\}} \biggr|>\sqrt{2} \biggr) \\ & \to P\left( \bigl| \sqrt{\max\{\xi, 0\}} - \mathbb{E} \sqrt{\max\{\xi, 0\}}\bigr|>\sqrt{2}\right)>10^{-10}, \end{aligned} $$ where we used the central limit theorem, and $\xi \sim N(0,1)$ is a standard Gaussian random variable. So at scale $N^{1/4}$ the deviation probability stays bounded below, whereas for a convex $1$-Lipschitz function Talagrand's inequality would force it to decay like $e^{-\sqrt{N}/4}$.
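To see the failure of concentration numerically, here is a minimal Python/NumPy sketch (my own illustration; the script, its sample sizes, and the name `deviation_prob` are not part of the argument above). It first brute-force checks the closed form for $f$ on a small cube, then Monte Carlo estimates $P(|f-\mathbb{E}f| > N^{1/4})$ for large even $N$, with $\mathbb{E}f$ replaced by the empirical mean; the estimate stays bounded away from zero (around a few times $10^{-4}$, matching the Gaussian limit above) instead of decaying.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

# 1) Brute-force check of the closed form on a small cube:
#    dist(x, A) = sqrt(max(x_1 + ... + x_N - N/2, 0)) on {0,1}^N.
N = 8
cube = np.array(list(itertools.product((0, 1), repeat=N)), dtype=float)
A = cube[cube.sum(axis=1) <= N / 2]                # points with at most N/2 ones
for x in cube:
    exact = np.min(np.linalg.norm(A - x, axis=1))  # inf_{y in A} ||x - y||
    assert np.isclose(exact, np.sqrt(max(x.sum() - N / 2, 0.0)))

# 2) Monte Carlo estimate of P(|f - E f| > N^(1/4)) for large even N,
#    using the closed form; the estimate does not tend to 0 as N grows.
def deviation_prob(N, trials=500_000):
    s = rng.binomial(N, 0.5, size=trials)          # x_1 + ... + x_N per sample
    f = np.sqrt(np.maximum(s - N / 2, 0.0))        # f at each random cube point
    return np.mean(np.abs(f - f.mean()) > N ** 0.25)

for N in (100, 1_000, 10_000, 100_000):
    print(N, deviation_prob(N))
```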