What exactly is a contradiction and how does it differ from falsity?
Your understanding is correct. Put simply, a contradiction is a sentence that is always false. More precisely,
A statement is a contradiction iff it is false in all interpretations.
In propositional logic, interpretations are valuation functions that assign each propositional variable a truth value, so a contradiction comes down to having a 0 in the final column of every row (= valuation) of the truth table.
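To see this computationally, here is a minimal Python sketch (the helper `is_contradiction` and the lambda encodings are my own, purely illustrative) that enumerates every valuation of a formula and checks whether the final column of its truth table is all 0's:

```python
from itertools import product

def is_contradiction(formula, variables):
    """Check whether `formula` evaluates to False under every valuation,
    i.e. whether the final column of its truth table contains only 0's."""
    return all(
        not formula(dict(zip(variables, row)))
        for row in product([True, False], repeat=len(variables))
    )

# A and not A is a contradiction; A alone is merely falsifiable.
print(is_contradiction(lambda v: v["A"] and not v["A"], ["A"]))  # True
print(is_contradiction(lambda v: v["A"], ["A"]))                 # False
```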
In predicate logic, interpretations are structures consisting of a domain of discourse and an interpretation function defining objects, functions and relations on it, so a contradiction is a statement which evaluates to false no matter the choice of objects and interpretation of the non-logical symbols.
Take the expression $\exists x (x < 0)$, for instance: This sentence is false in the structure of the natural numbers, but true when we evaluate it in the integers, or under some non-standard interpretation of the natural numbers where e.g. the symbol $<$ is taken to mean "greater than". The statement is not valid (= true in all structures), but it is not contradictory (= false in all structures), either: While it may be coincidentally false in some particular structure, i.e. the situation we're currently interested in, it is logically possible for it to be true.
On the other hand, $\exists x (x < 0) \land \neg \exists x (x < 0)$ is true in none of the above three structures; in fact, it fails to be true in any structure whatsoever: No matter which domain of objects we take and which interpretation we assign to the symbols $<$ and $0$, the form of the statement, $A \land \neg A$, makes it inherently impossible for it ever to be true.
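The real structures involved are infinite, so the following Python sketch uses small finite ranges as stand-ins for the naturals and the integers (the names `exists_negative`, `standard` and `reversed_` are mine, purely illustrative); it only shows how the truth of $\exists x (x < 0)$ varies with the structure, while $\exists x (x < 0) \land \neg \exists x (x < 0)$ never comes out true:

```python
def exists_negative(domain, less_than, zero):
    """Evaluate  ∃x (x < 0)  in a structure given by a domain,
    an interpretation of < and an interpretation of 0."""
    return any(less_than(x, zero) for x in domain)

naturals = range(0, 10)         # finite stand-in for the natural numbers
integers = range(-10, 10)       # finite stand-in for the integers

standard = lambda a, b: a < b
reversed_ = lambda a, b: a > b  # non-standard reading of "<" as "greater than"

print(exists_negative(naturals, standard, 0))   # False: no natural is below 0
print(exists_negative(integers, standard, 0))   # True
print(exists_negative(naturals, reversed_, 0))  # True: e.g. 1 "<" 0 under this reading

# The conjunction  ∃x (x < 0) ∧ ¬∃x (x < 0)  is false in every structure:
phi = exists_negative(naturals, standard, 0)
print(phi and not phi)  # False, whatever phi turned out to be
```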
To pick up your example, "The sky is red" is only coincidentally false in the actual world because our earthly sky just so happens to be blue, but it is possible to imagine a universe in which the atmosphere is constituted differently and the sky is indeed red: The sentence is false in the real world, but it is not contradictory. In symbols, the sentence can be formalized as $p$, and the final column of its truth table will contain both a 1 and a 0.
On the other hand, $x \in S \land x \not \in S$ is another statement of the form $A \land \neg A$, and thus a contradiction: It is false in all structures, and thus also in our real-world conception of sets in standard ZF set theory. Its truth table has only 0's in the final column, no matter which truth values the component statements take.
The symbol $\bot$ is used to refer to a contradiction. And indeed, any contradictory statement is logically equivalent to (and can be transformed into, using rules of inference) both $A \land \neg A$ and $\bot$: All contradictory statements have the same truth table with only 0's in the last column.
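As a quick sanity check, here is another small Python sketch (the helper `truth_table` and the formula encodings are mine) comparing two syntactically different contradictions: both final columns consist of 0's only, so the formulas are logically equivalent.

```python
from itertools import product

def truth_table(formula, variables):
    """The final column of the truth table: the formula's value at each valuation."""
    return [formula(dict(zip(variables, row)))
            for row in product([True, False], repeat=len(variables))]

vs = ["A", "B"]
c1 = lambda v: v["A"] and not v["A"]                             # A ∧ ¬A
c2 = lambda v: (v["A"] or v["B"]) and not v["A"] and not v["B"]  # (A ∨ B) ∧ ¬A ∧ ¬B

print(truth_table(c1, vs))                         # [False, False, False, False]
print(truth_table(c1, vs) == truth_table(c2, vs))  # True: same all-0 column
```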
The following is less concrete than lemontree's answer and amWhy's comments, which I think are more on-point. However, I do think the following is worth saying, so I'm putting it here.
The snappy version, as you suspect, is:
A contradiction is never true in any situation. A statement is called false if it fails in the particular situation (or class of situations) we care about - but a false statement may nonetheless hold in a different situation (whereas a contradiction cannot).
Below I'll describe the two main ways of making this precise.
Semantic version
The "semantic" view of logic is that a logical system $\mathcal{L}$ is used to describe objects (or structures): basically, such an $\mathcal{L}$ consists of a class of sentences $Sent_\mathcal{L}$, a class of applicable structures $Struc_\mathcal{L}$, and a relation $\models_\mathcal{L}$ between applicable structures and sentences with $$\mathfrak{A}\models_\mathcal{L}\varphi$$ being interpreted as "the sentence $\varphi$ is true in the structure $\mathfrak{A}$."
A contradiction in the sense of $\mathcal{L}$, then, is a sentence which is not true in any structure: a $\psi$ such that for every $\mathfrak{A}$ we have $\mathfrak{A}\not\models_\mathcal{L}\psi$. By contrast, when we decide to focus on a particular structure $\mathfrak{S}$, we say that $\varphi$ is false iff $\mathfrak{S}\not\models_\mathcal{L}\varphi$.
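Under the (purely illustrative) assumption that we render $Struc_\mathcal{L}$ as a list of valuations and $\models_\mathcal{L}$ as a Python callable, the two notions can be sketched as follows; the class name `SemanticLogic` and its methods are my own labels, not standard terminology:

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class SemanticLogic:
    """A toy rendering of (Sent, Struc, ⊨): structures plus a satisfaction relation."""
    structures: Iterable[Any]
    models: Callable[[Any, Any], bool]   # models(A, phi) means  A ⊨ phi

    def is_contradiction(self, phi):
        # phi is a contradiction iff no structure satisfies it
        return not any(self.models(A, phi) for A in self.structures)

    def is_false_in(self, A, phi):
        # "false" is relative to a particular structure we care about
        return not self.models(A, phi)

# Instance: propositional logic over {p}; structures = valuations of p.
L = SemanticLogic(structures=[{"p": True}, {"p": False}],
                  models=lambda A, phi: phi(A))

sky_is_red = lambda A: A["p"]                    # merely false in the actual world
contradiction = lambda A: A["p"] and not A["p"]  # false everywhere

actual_world = {"p": False}
print(L.is_false_in(actual_world, sky_is_red))   # True
print(L.is_contradiction(sky_is_red))            # False
print(L.is_contradiction(contradiction))         # True
```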
Syntactic version
We can also refrain from talking about structures entirely. The "syntactic" view of logic is that a logical system is used to manipulate sentences (without necessarily assigning them particular meanings). Basically, such an $\mathcal{L}$ consists of a class of sentences $Sent_\mathcal{L}$ and a relation $\vdash_\mathcal{L}$ between sets of sentences and individual sentences with $$\Gamma\vdash_\mathcal{L}\varphi$$ being interpreted as "the sentence $\varphi$ is deducible from the set of sentences $\Gamma$."
A contradiction in this framework is then a sentence from which we can deduce anything: $\varphi$ is a contradiction in the sense of $\mathcal{L}$ iff for all $\psi$ we have $\{\varphi\}\vdash_\mathcal{L}\psi$. By contrast, when we say that a sentence $\varphi$ is false, what we mean is that we have in mind some particular "background set of sentences" $\Gamma$ and $\Gamma\cup\{\varphi\}$ would let us deduce anything (think of this $\Gamma$ as our set of axioms).
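To make the "deduces anything" characterization tangible, here is a toy Python sketch of a deducibility relation with only three rules (assumption, $\wedge$-elimination, and "from $\varphi$ and $\neg\varphi$, deduce anything"); the tuple encoding of formulas and the function `derives` are my own simplification, not a full proof system:

```python
# Formulas as nested tuples: ("var", "P"), ("not", phi), ("and", phi, psi).
def derives(gamma, target):
    """A toy deducibility relation: assumptions, ∧-elimination, ex falso quodlibet."""
    known = set(gamma)
    changed = True
    while changed:                                  # close Γ under ∧-elimination
        changed = False
        for f in list(known):
            if f[0] == "and":
                for part in f[1:]:
                    if part not in known:
                        known.add(part)
                        changed = True
    if any(("not", f) in known for f in known):     # Γ is inconsistent:
        return True                                 # ex falso, anything is deducible
    return target in known                          # otherwise only what Γ contains

P, Q = ("var", "P"), ("var", "Q")
contradiction = ("and", P, ("not", P))
print(derives({contradiction}, Q))   # True: a contradiction lets us deduce anything
print(derives({P}, Q))               # False
```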
Connecting the two
It's worth noting that every semantic logic induces a syntactic logic: given a semantic logic $\mathcal{L}=(Sent_\mathcal{L}, Struc_\mathcal{L},\models_\mathcal{L})$ we get a syntactic logic $\mathcal{L}'=(Sent_{\mathcal{L}'}, \vdash_{\mathcal{L}'})$ defined as follows:
$Sent_{\mathcal{L}'}=Sent_\mathcal{L}$, that is, we use the same sentences for both logics.
We set $\Gamma\vdash_{\mathcal{L}'}\varphi$ iff whenever $\mathfrak{A}\in Struc_\mathcal{L}$ with $\mathfrak{A}\models_\mathcal{L}\psi$ for each $\psi\in\Gamma$, we have $\mathfrak{A}\models_\mathcal{L}\varphi$.
Note that this makes the two notions of "contradiction" line up: if $\varphi$ fails in every structure, then vacuously we have $\{\varphi\}\vdash_{\mathcal{L}'}\psi$ for every $\psi$.
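Continuing the propositional toy example, the induced relation $\vdash_{\mathcal{L}'}$ can be sketched in Python as "every valuation satisfying all of $\Gamma$ also satisfies $\varphi$" (again, the names `derives` and `structures` are illustrative only):

```python
from itertools import product

# Structures: all valuations of the variables A, B;  A ⊨ phi  is just evaluation.
structures = [dict(zip(["A", "B"], row)) for row in product([True, False], repeat=2)]

def derives(gamma, phi):
    """The induced  Γ ⊢' φ : every structure satisfying all of Γ also satisfies φ."""
    return all(phi(A) for A in structures if all(g(A) for g in gamma))

contradiction = lambda A: A["A"] and not A["A"]
samples = [lambda A: A["B"], lambda A: not A["B"], lambda A: False]

# No structure satisfies the contradiction, so it vacuously derives everything:
print(all(derives([contradiction], psi) for psi in samples))  # True
```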
There is also a way to go "syntax-to-semantics" which again makes the two notions of "contradiction" line up, but it's a bit less natural (basically we interpret "structure" as "set of sentences which doesn't deduce everything and is maximal with that property").
A caveat
Actually, the above isn't entirely accurate: there are logical systems where sentences of the form "$P\wedge\neg P$" do not let you deduce everything (these are called "paraconsistent logics;" another relevant (hehe) term is "relevance logics"). This leads to a more nuanced notion of "contradiction" and its relatives. But that's a more advanced topic which I wouldn't approach before first understanding the classical picture.