Why is propositional logic not Turing complete?
Aren't all computations in computers performed using logic gates, which can be represented as logical operators, though?
Using logic gates with feedback. Using only propositional logic, it's impossible to express arbitrary repetition, i.e., evaluating the same subformula an unbounded number of times: every formula has a fixed, finite size. In programming terms, that means no loops, recursion, jumps, or anything similar. Memory is also an issue, but without a means of repetition, even with some notion of unbounded memory you'd only ever be able to use a fixed amount of it.
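To make that concrete, here's a small sketch of my own (not from the answer above), using parity as the example: a propositional formula can compute the parity of exactly four bits as a fixed XOR expression, but handling inputs of arbitrary length needs a loop, which is exactly the kind of repetition no single formula can express.

```python
# Parity of exactly 4 bits: a fixed propositional formula (an XOR chain).
# One formula, one fixed input size: this is all propositional logic gives you.
def parity4(b0: bool, b1: bool, b2: bool, b3: bool) -> bool:
    return b0 ^ b1 ^ b2 ^ b3

# Parity of arbitrarily many bits: needs repetition (a loop),
# which no single propositional formula can express.
def parity(bits: list[bool]) -> bool:
    result = False
    for b in bits:
        result ^= b
    return result

print(parity4(True, False, True, True))  # True
print(parity([True] * 1001))             # True, for any input length
```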
Propositional logic is among the computationally weakest systems anyone actually cares about: on its own it can't even express basic arithmetic (i.e., arithmetic on numbers of arbitrary size, as opposed to the fixed width of something like an 8-bit adder).
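As an illustration (again my own sketch, not part of the original answer): a fixed-width adder really is just a propositional formula over its input bits, since a ripple-carry adder is nothing but ANDs, ORs, and XORs. But the construction is tied to one specific width; there is no single formula that adds numbers of arbitrary size.

```python
def full_adder(a: bool, b: bool, carry_in: bool) -> tuple[bool, bool]:
    """One-bit full adder, expressed purely with propositional connectives."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a and b) or (carry_in and (a ^ b))
    return sum_bit, carry_out

def add8(x: list[bool], y: list[bool]) -> list[bool]:
    """8-bit ripple-carry adder (result mod 2**8). The loop here just unrolls
    into a fixed formula of 8 full adders, because the width is fixed in advance."""
    assert len(x) == len(y) == 8
    carry = False
    out = []
    for a, b in zip(x, y):  # least-significant bit first
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out
```

The loop in `add8` is only shorthand for writing out eight copies of the same subformula; for numbers of unbounded width it would not unroll into any fixed formula at all.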
What kind of computations is propositional logic unable to perform?
Most of them!
As far as Turing-completeness goes, that essentially means the ability to express computations that we can't be sure will ever finish, but there's also a huge range of computations that are known to always finish and which propositional logic still can't express.
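For a concrete instance of a computation we're not sure will always finish (an illustrative sketch of mine, not from the original): the Collatz iteration. Nobody has proven that this loop terminates for every positive starting value, yet any Turing-complete language can express it.

```python
def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1. Whether this loop terminates for
    every positive integer n is an open problem (the Collatz conjecture)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111
```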
A good example of another (relatively weak) class of computations people care about is recognizing regular languages. You can't express this in propositional logic because the input length is unbounded, and even if you set a hard limit on the input length, the straightforward translation is a formula that takes a massive OR over subformulas, one for each possible matching string of that length.
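As a sketch of the contrast (my own example, not from the answer): recognizing even a very simple regular language, say binary strings with an even number of 1s, is just a loop over the input. But the loop runs for as many steps as the input is long, which is precisely the unbounded repetition a propositional formula over a fixed set of variables can't provide.

```python
def even_ones(s: str) -> bool:
    """A 2-state DFA for the regular language 'even number of 1s'.
    The number of loop iterations depends on the input, so no fixed
    propositional formula can replace it for all input lengths."""
    state_even = True
    for ch in s:
        if ch == "1":
            state_even = not state_even
    return state_even

print(even_ones("1011"))    # False (three 1s)
print(even_ones("110110"))  # True  (four 1s)
```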